Systems and methods for distributed rules processing

- Pegasystems Inc.

The invention provides in some aspects a distributed rules processing system that includes first and second digital data processors that are coupled to one another by one or more networks. A rules base and a transactional data base are each coupled to one of the digital data processors; both may be coupled to the same digital data processor or otherwise. One or more coordination modules (e.g., “proxies”), each of which is associated with a respective one of the digital data processors, makes available to a selected one of those digital data processors from the other of those digital data processors (i) one or more selected rules from the rules base, and/or (ii) one or more data from the transactional database on which those rules are to be executed. The selected digital data processor executes one or more of the selected rules as a rules engine, executes one or more of the selected rules using a rules engine, and/or processes one or more data from the transactional database with rules executing on a rules engine.

Description
BACKGROUND OF THE INVENTION

The invention relates to digital data processing and, more particularly, for example, to distributed processing of rules bases.

Computer systems that facilitate business operations based on information specific to an industry or enterprise are well known in the art. These typically rely on rules identifying situations that are expected to arise during enterprise operation and the applicable responses. Such systems have been used in a range of applications, from health care to automotive repair. The rules on which they rely come from experts in the field, from the collective experience of workers on the “front line,” or a combination of these and other sources.

Though many computer systems of this sort incorporate application-specific knowledge directly into source code (using, for example, a sequence of “if . . . then . . . else” statements, or the like), more complex systems store that knowledge separately from the programs that access it. Some use “rules bases” that store application-specific information in tables, database records, database objects, and so forth. Examples of systems of this type are disclosed in commonly assigned U.S. Pat. No. 5,826,250, entitled “Rules Bases and Methods of Access Thereof” and U.S. Pat. No. 7,640,222, entitled “Rules Base Systems and Methods with Circumstance Translation,” the teachings of both of which are incorporated herein by reference.

These and other rules-based business process management (BPM) applications are commonly used in enterprise computing, for example, where they facilitate a range of business operations, from marketing to manufacturing to distribution to technical support. By way of example, a BPM application can implement data-processing workflows to support the processing of transactional data ranging from customer service requests received by retail and banking enterprises to the routing and resolution of health care claims by insurance enterprises.

With increasing frequency, enterprise software applications incorporate architectures that permit their use “in the cloud,” that is, over the Internet, with computing resources delivered up to each user on demand. In a sense, this extends the client-server model of past eras from the physical confines of the enterprise to the expanse of the world.

Where a common architecture of the past might provide for software that executes on a server, e.g., located at enterprise headquarters, and that processes requests entered by support personnel at the enterprise's branch offices, the new cloud architectures permit servicing of requests by servers located around the world. In operation, any given request by a user on a client device might as well be attended to by a server located in a neighboring state as in a neighboring country. Thus, while cloud applications are often initially tested behind an enterprise firewall, they are typically architected for final deployment outside that firewall, on a dynamically changing set of third-party servers (e.g., owned by Amazon, SalesForce, Google, or other cloud-computing providers).

BPM applications can be deployed in the cloud, like other enterprise applications. However, since business process management often goes to the heart of the enterprise, chief executives, IT directors, and corporate boards have yet to fully embrace this model, mainly for fear that storing rules bases and/or transactional data in the cloud exposes them to theft or wrongful disclosure.

Other software applications are evolving similarly. Those that traditionally ran solely on the “desktop” are now increasingly being executed in the cloud. Word processing is one example. Microsoft, Google and other software providers would as soon enterprise (and other) customers store documents and execute word processing via the cloud as via locally deployed desktop applications. Unfortunately, this results in uneven usage of information technology resources, with network infrastructure and desktop computers being alternately overwhelmed and underutilized, depending on the cycle of the day, month and year.

An object of this invention is to provide improved systems and methods for digital data processing. A more particular object is to provide improved systems and methods for business process management, for example, rules processing.

A further object is to provide such improved systems and methods as facilitate deployment of BPM and other rules-processing applications on multiple digital data processors.

A still further object is to provide such improved systems and methods as facilitate such deployment in distributed environments, such as, for example, in cloud computing environments.

Yet a still further object is to provide such improved systems and methods as provide better security for BPM and other rules-processing applications in such distributed environments.

Still yet a further object is to provide such improved systems and methods as better utilize computing and networking resources in applications so distributed.

SUMMARY OF THE INVENTION

The foregoing are among the objects attained by the invention, which provides in some aspects a distributed rules processing system that includes first and second digital data processors that are coupled to one another by one or more networks. A rules base and a transactional data base are each coupled to one of the digital data processors; both may be coupled to the same digital data processor or otherwise.

One or more coordination modules (e.g., “proxies”), each of which is associated with a respective one of the digital data processors, makes available to a selected one of those digital data processors from the other of those digital data processors (i) one or more selected rules from the rules base, and/or (ii) one or more data from the transactional database on which those rules are to be executed. The selected digital data processor executes one or more of the selected rules as a rules engine, executes one or more of the selected rules using a rules engine, and/or processes one or more data from the transactional database with rules executing using a rules engine.

According to related aspects of the invention, the first and second digital data processors of a distributed rules processing system, e.g., of the type described above, can be disposed remotely from one another and can be coupled for communication by the Internet, as well as, optionally, by local area networks, wide area networks, and so forth. A firewall and/or other such functionality that is coupled to one or more of those networks prevents the selected digital data processor from accessing from the other digital data processor (i) the selected rules and/or (ii) the data on which those rules are to be executed.

Further related aspects of the invention provide a distributed rules processing system, e.g., of the type described above, wherein one or more of the coordination modules make the selected rules and/or data available to the selected digital data processor from the other digital data processor in response to a request from the rules engine.

Thus, by way of example, in a system according to the foregoing aspects of the invention, the first digital data processor can include a rules base, e.g., for processing credit card information. The second digital data processor can, likewise, include a data base of transactional data, e.g., pertaining to the opening of credit card accounts, purchases against the credit cards, refunds, and so forth.

According to one operational scenario of such a system (and to illustrate methods according to further aspects of the invention), a rules engine operating, for example, on the first digital data processor can utilize a proxy operating, for example, on the second digital data processor to access transactional data that is “behind the firewall” on the second digital data processor for processing by the rules engine with rules already accessible to the first digital data processor (e.g., on account of its inclusion of and/or coupling to the rules base).

To that end, by way of non-limiting example, in related aspects of the invention, the coordination modules (or proxies) make the selected rules and/or data available to the selected digital data processor from the other digital data processor by opening one or more communications ports on that other digital data processor.

Continuing the above example, in a related operational scenario, a coordination module executing on the first digital data processor can respond to transactional data base access requests generated by the rules engine to determine whether that data base is coupled to the first digital data processor and, if not, to cooperate with the coordination module on the second digital data processor to make the transactional data available to the rules engine from the second digital data processor.
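
By way of illustration only, and not as a definition of any claimed embodiment, the following Python sketch suggests how such a coordination module might route a transactional-data access request: it checks whether the requested data is coupled locally and, if not, cooperates with a counterpart module on the other digital data processor. All class, method, and record names here are hypothetical.

```python
# Hypothetical sketch of a coordination module ("proxy") routing a data request.
# The names (CoordinationModule, fetch, serve) are illustrative assumptions only.

class CoordinationModule:
    def __init__(self, local_store, peer=None):
        self.local_store = local_store   # transactional data coupled locally, if any
        self.peer = peer                 # counterpart module on the other processor

    def fetch(self, record_id):
        # First, determine whether the data base holding the record is local.
        if record_id in self.local_store:
            return self.local_store[record_id]
        # Otherwise, cooperate with the peer module to make the data available.
        if self.peer is not None:
            return self.peer.serve(record_id)
        raise LookupError(f"record {record_id!r} unavailable locally or remotely")

    def serve(self, record_id):
        # Invoked by a remote coordination module; exposes only the selected data.
        return self.local_store[record_id]


# Example: a rules engine on the first processor requests data held only by the second.
module_second = CoordinationModule({"acct-42": {"balance": 100.0}})
module_first = CoordinationModule({}, peer=module_second)
print(module_first.fetch("acct-42"))   # -> {'balance': 100.0}
```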

Conversely, according to the operational scenario of a system paralleling those described in the examples above, a rules engine executing on the second digital data processor can utilize a proxy operating, for example, on the first digital data processor, to access rules necessary to process transactional data already accessible to that digital data processor (e.g., on account of its inclusion of and/or coupling to the transactional data base).

In other related aspects, the invention provides a distributed rules processing system, e.g., of the type described above, in which one or more of the coordination modules make the selected rules and/or data available to the selected digital data processor from the other digital data processor in response to a request from that other digital data processor.

In further related aspects of the invention, a request made from the other digital data processor in a distributed rules processing system, e.g., of the type described above, is made by a rules engine executing on that other digital data processor.

Continuing the example above (and to illustrate methods according to still further aspects of the invention), in a system according to the foregoing aspects of the invention, a rules engine operating on the first digital data processor can utilize the proxy operating on the second digital data processor to access some transactional data in the data base on the second digital data processor for processing by the rules engine on the first digital data processor (and/or, conversely, to store transactional data processed by that rules engine to that transactional data base). It can also effect, through use of that proxy and/or its counterpart on the first digital data processor, transfer of selected rules to the second digital data processor for execution by its rules engine, e.g., on other data stored (and/or to be stored) in the transactional database.

These and other aspects of the invention are evident in the drawings and in the description that follows.

BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the invention may be attained by reference to the drawings, in which

FIG. 1 depicts a digital data processing system for distributed rules processing according to one practice of the invention;

FIG. 2 depicts a method of operation of a coordination module in a system of FIG. 1; and

FIG. 3 depicts operation of a coordination module in a system according to the invention within a multi-tenant cloud-based environment.

DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENT

FIG. 1 depicts a digital data processing system 10 for distributed processing in a rules-based system according to one practice of the invention. The illustrated system includes client (or “tenant”) digital data processors 12, 14 that are coupled via network 16 for communication with server digital data processor 18.

The client digital data processors 12, 14 are conventional desktop computers, workstations, minicomputers, laptop computers, tablet computers, PDAs or other digital data processing apparatus of the type that are commercially available in the marketplace and that are suitable for operation in the illustrated system as described herein, all as adapted in accord with the teachings hereof.

The server digital data processor 18 is, likewise, a digital data processing apparatus of the type commercially available in the marketplace suitable for operation in the illustrated system as described herein, as adapted in accord with the teachings hereof. Though the server 18 is typically implemented in a server-class computer, such as a minicomputer, it may also be implemented in a desktop computer, workstation, laptop computer, tablet computer, PDA or other suitable apparatus (again, as adapted in accord with the teachings hereof).

Network 16 comprises one or more networks suitable for supporting communications among and between illustrated digital data processors 12, 14, 18. Illustrated network 16 comprises one or more public networks, specifically, the Internet, though, in other embodiments, it may include (instead or in addition) one or more other networks of the type known in the art, e.g., local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANs), and/or Internet(s).

Illustrated client computer 12 comprises central processing, memory, storage and input/output units and other constituent components (not shown) of the type conventional in the art that are configured to form application 12a, transaction database 12b, rules base 12c, and coordination module 12d, in accord with the teachings hereof. One or more of these constituent components, and/or portions thereof, may be absent in various embodiments of the invention. Thus, for example, as suggested by dashed lines, the digital data processor 12 may not include a rules base. Conversely, it may include a portion of a rules base but not a transaction database, or it may include neither. In other embodiments, it may include a coordination module 12d (described below) but not a transaction database, rules base or an application, all by way of non-limiting example.

The central processing, memory, storage and input/output units of client digital data processor 12 may be configured to form and/or may be supplemented by other elements of the type known in the art desirable or necessary to support elements 12a-12d in accord with the teachings hereof, as well as to support other operations of the digital data processor 12. These can include, by way of non-limiting example, peripheral devices (such as keyboards and monitors), operating systems, database management systems, and network interface cards and software, e.g., for supporting communications between digital data processor 12 and other devices over network 16.

Digital data processor 12 is coupled to network 16 via firewall 12e. This is a conventional device of the type known in the art (as otherwise configured in accord with the teachings hereof) suitable for blocking unauthorized access, yet, permitting authorized access, to the digital data processor 12, including (but not limited to) data and rules bases 12b, 12c.

Firewall 12e, which is constructed and operated in the conventional manner known in the art, may comprise a “hardware” (or stand-alone) firewall and/or it may comprise a software firewall configured from the constituent and/or other components of digital data processor 12, again, in the conventional manner known in the art.

The constituent components of illustrated client digital data processor 14 may similarly be configured in accord with the teachings hereof to form application 14a, transaction database 14b, rules base 14c, and coordination module 14d. As well, they may be supplemented by other elements of the type known in the art desirable or necessary to support elements 14a-14d in accord with the teachings hereof, as well as to support other operations of the digital data processor 14. The client digital data processor 14 may also include a firewall 14e, e.g., constructed and operated like device 12e, discussed above, to block unauthorized access, yet, permit authorized access, to the digital data processor 14, including (but not limited to) data and rules bases 14b, 14c.

Although digital data processors 12 and 14 are depicted and described in like manner here, it will be appreciated that this is for sake of generality and convenience: in other embodiments, these devices may differ in architecture and operation from that shown and described here and/or from each other, all consistent with the teachings hereof. Moreover, it will be appreciated that although only two closely positioned client devices 12, 14 are shown, other embodiments may have greater or fewer numbers of these devices disposed near and/or far from one another, collocated behind one or more common firewalls 12e, 14e or otherwise.

Like client digital data processors 12, 14, server digital data processor 18 comprises central processing, memory, storage and input/output units and other constituent components (not shown) of the type conventional in the art that are configured in accord with the teachings hereof to form rules engine 18a, transaction database 18b, rules base 18c, and coordination module 18d, one or more of which (and/or portions thereof) may be absent in various embodiments of the invention. The digital data processor 18 may also include a firewall 18e, e.g., constructed and operated like device 12e, discussed above, to block unauthorized access, yet, permit authorized access, to the digital data processor 18, including (but not limited to) data and rules bases 18b, 18c.

Although only a single server digital data processor 18 is depicted and described here, it will be appreciated that other embodiments may have greater or fewer numbers of these devices disposed near and/or far from one another, collocated behind one or more common firewalls 18e or otherwise. Indeed, in preferred such embodiments, the digital data processor 18 is configured as a server on a “cloud” platform, e.g., of the type commercially available from Amazon, SalesForce, Google, or other cloud-computing providers. As above, those other servers may differ in architecture and operation from that shown and described here and/or from each other, all consistent with the teachings hereof.

Rules bases 12c, 14c, 18c comprise conventional rules bases of the type known in the art (albeit configured in accord with the teachings hereof) for storing rules (e.g., scripts, logic, controls, instructions, metadata etc.) and other application-related information in tables, database records, database objects, and so forth. Preferred such rules and rules bases are of the type described in the aforementioned incorporated-by-reference U.S. Pat. No. 5,826,250, entitled “Rules Bases and Methods of Access Thereof” and U.S. Pat. No. 7,640,222, entitled “Rules Base Systems and Methods with Circumstance Translation,” though, rules and rules bases that are architected and/or operated differently may be used as well.

As noted above, not all of these rules bases may be present in any given embodiment. Conversely, some embodiments may utilize multiple rules bases, e.g., an enterprise-wide rules base 18c on the server 18 and domain-specific rules bases on the client devices 12, 14, all by way of example. Moreover, to the extent that multiple rules bases are provided in any given embodiment, they may be of like architecture and operation as one another; though, they may be disparate in these regards, as well.

In some embodiments, rules may comprise meta-information structures. These are structures that can include data elements and/or method elements. The latter can be procedural or declarative. In the former regard, for example, such a structure may be procedural insofar as it comprises one or more of a series of ordered steps. In the latter regard, such a structure may be declarative, for example, insofar as it sets forth (declares) a relation between variables, values, and so forth (e.g., a loan rate calculation or a decision-making criterion), or it declares the desired computation and/or result without specifying how the computations should be performed or the result achieved. By way of non-limiting example, the declarative portion of a meta-information structure may declare the desired result of retrieval of a specified value without specifying the data source for the value or a particular query language (e.g., SQL, CQL, .QL etc.) to be used for such retrieval. In other cases, the declarative portion of a meta-information structure may comprise declarative programming language statements (e.g., SQL). Still other types of declarative meta-information structures are possible.
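
Purely as a non-limiting illustration (and not as part of any claimed embodiment), a rule comprising both declarative and procedural portions might be represented as in the following sketch; the field names, the loan-rate expression, and the tiny evaluator are hypothetical, and rates are expressed in whole percentage points for simplicity.

```python
# Hypothetical representation of a rule as a meta-information structure having
# a declarative portion (a relation among values) and a procedural portion
# (an ordered series of steps). All names and values are illustrative only.

loan_rate_rule = {
    "name": "LoanRateCalculation",
    # Declarative portion: declares the desired relation without specifying a
    # data source or a particular query language for obtaining the inputs.
    "declarative": {"rate": "base_rate + risk_premium"},
    # Procedural portion: an ordered series of steps.
    "procedural": [
        "fetch base_rate",
        "fetch risk_premium",
        "compute rate",
        "record result",
    ],
}

def evaluate_declarative(rule, inputs):
    # A minimal evaluator of the declared relation, with builtins disabled.
    expression = rule["declarative"]["rate"]
    return eval(expression, {"__builtins__": {}}, dict(inputs))

print(evaluate_declarative(loan_rate_rule, {"base_rate": 4, "risk_premium": 2}))
# -> 6  (whole percentage points)
```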

While some rules may comprise meta-information structures that are wholly procedural and others may comprise those that are wholly declarative, the illustrated embodiment also contemplates rules that comprise both procedural and declarative meta-information structures, i.e., rules that have meta-information structure portions that are declarative, as well as meta-information structure portions that are procedural.

Furthermore, rules of the illustrated embodiment that comprise meta-information structures may also reference and/or incorporate other such rules, which themselves may, in turn, reference and/or incorporate still other such rules. As a result, editing such rule may affect one or more rules (if any) that incorporate it.

An advantage of rules that comprise meta-information structures over conventional rules is that they provide users with the flexibility to apply any of code-based and model-driven techniques in the development and modification of software applications and/or computing platforms. Particularly, like models in a model-driven environment, meta-information structures comprise data elements that can be used to define any aspect of a complex system at a higher level of abstraction than source code written in programming languages such as Java or C++. On the other hand, users may also embed programming language statements into meta-information structures if they deem that to be the most efficient design for the system being developed or modified. At run-time, the data elements of the meta-information structures along with programming language statements (if any) are automatically converted into executable code by a rules engine (e.g., 18a).

Thus, in some embodiments, rules may be the primary artifacts that get created, stored (e.g., in a rules base) or otherwise manipulated to define and/or modify the overall functionality of rules-based applications that may automate and/or manage various types of work in different business domains at run-time. By way of non-limiting example, a plurality of rules stored in a rules base (e.g., 12c, 14c, 18c) may be configured to define all aspects (e.g., user interface, decision logic, integration framework, process definition, data model, reports, security settings etc.) of a software application. Such a software application may include specialized software that is used within a specific industry or a business function (e.g., human resources, finance, healthcare, telecommunications etc.), or it may include a cross-industry application (e.g., a project management application), or any other type of software application. As the software application executes on a digital data processor (e.g. any of 12, 14 and 18), any portion of the rules that define the application may be retrieved from a rules bases (e.g. any of 12c, 14c and 18c) and processed/executed (e.g., using a rules engine 18a as defined below) in response to requests/events signaled to and/or detected by the digital data processor at run-time.
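
The following sketch, offered only by way of illustration and under assumed (hypothetical) names, suggests how rules stored in a rules base might be retrieved and executed in response to a run-time event; a real rules base would, of course, store rules in tables, records, or objects rather than an in-memory dictionary.

```python
# Hypothetical sketch: rules keyed by application aspect and name are retrieved
# from a rules base and executed by a rules engine in response to run-time
# events. The dictionary, aspect names, and event fields are assumptions only.

RULES_BASE = {
    ("decision_logic", "approve_refund"): lambda event: event["amount"] < 50,
    ("user_interface", "refund_form"):    lambda event: ["amount", "reason"],
}

def rules_engine(aspect, name, event):
    # Retrieve the rule that defines this aspect of the application and run it.
    rule = RULES_BASE[(aspect, name)]
    return rule(event)

# A refund request signaled to the digital data processor at run-time:
print(rules_engine("decision_logic", "approve_refund", {"amount": 25}))  # -> True
print(rules_engine("user_interface", "refund_form", {}))  # -> fields to display
```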

Transactional data bases 12b, 14b, 18b comprise conventional data bases of the type known in the art (albeit configured in accord with the teachings hereof) for storing corporate, personal, governmental or other data that may be any of generated, stored, retrieved and otherwise processed (hereinafter, collectively referred to as “processed”) by rules in one or more of the rules bases 12c, 14c, 18c. The data may be financial data, customer records, personal data, run-time data related to an application, or other type of data and it may be stored in tables, database records, database objects, and so forth.

As above, not all of the illustrated transactional data bases may be present in any given embodiment. Conversely, some embodiments may utilize multiple transactional data bases, e.g., an enterprise-wide data base 18b on the server 18 and branch-office specific data bases on the client devices 12, 14, all by way of example. Moreover, to the extent that multiple transactional data bases are provided in any given embodiment, they may be of like architecture and operation as one another; though, they may be disparate in these regards, as well.

Illustrated digital data processor 18 also includes rules engine 18a of the type conventionally known in the art (albeit configured in accord with the teachings hereof) for use in processing/executing rules from a rules base in order to process data in (and/or for storage to) a transactional database, e.g., in connection with events signaled to and/or detected by the engine. Preferred such rules engines are of the type described in the aforementioned incorporated-by-reference U.S. Pat. No. 5,826,250, entitled “Rules Bases and Methods of Access Thereof” and U.S. Pat. No. 7,640,222, entitled “Rules Base Systems and Methods with Circumstance Translation” and/or U.S. patent application Ser. No. 11/681,269, filed Mar. 2, 2007, entitled “Proactive Performance Management For Multi-User Enterprise Software Systems,” the teachings too of which are incorporated by reference herein—all as adapted in accord with the teachings hereof.

The rules engine 18a may be implemented in a single software program, in multiple software programs/modules, or a combination of software modules/programs. Moreover, it may comprise programming instructions, scripts, or rules (e.g., rules stored in rules base 18c) and/or a combination thereof.

Though, in the illustrated embodiment in FIG. 1, the rules engine 18a executes on the server 18, in other embodiments, the techniques described herein may be employed to execute the rules engine 18a on or over multiple digital data processors (e.g., 12, 14 and 18). For instance, the rules engine 18a may initially be invoked for execution on a single digital data processor (e.g., 18). Subsequently, portions of it (or, potentially, the entirety of it) may be apportioned, distributed and executed over multiple digital data processors using the techniques described herein.

Such distributed execution of the rules engine can be advantageous, by way of non-limiting example, when execution of an enterprise-wide BPM application necessitates access to sensitive corporate or personal data during intermediate processing steps. For example, in an enterprise with decentralized record-keeping, the rules engine 18a can be utilized to generate a summary report that requires analysis of sensitive personnel-related data maintained in local branch offices. To that end, the engine 18a executes rules for performing preparatory tasks, such as, zeroing out data collection variables and identifying local offices to be queried. The engine 18a also retrieves from rules base 18c or otherwise generates rules that will serve as rules engines (e.g., 12a, 14a) customized or otherwise suited for execution on digital data processing equipment 12, 14 at those offices, as well as rules for execution on those engines 12a, 14a to analyze (and anonymize) sensitive data from the respective offices. Both the rules engine-defining rules and the data analysis rules are distributed to the equipment 12, 14, where they perform these functions and send the requisite information back to server 18 for reporting by the BPM application executing there. Such distributed execution has the advantage of permitting the BPM application executing using engine 18a to generate an enterprise-wide report, without necessitating the transmission of sensitive data outside the confines of the local offices.
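
By way of illustration only (with hypothetical office names, record fields, and function names), the following sketch captures the flavor of that scenario: analysis rules run at each branch office and only anonymized aggregates are returned to server 18 for the enterprise-wide report.

```python
# Hypothetical sketch of the distributed-execution scenario described above.
# Sensitive personnel records stay at each branch office; only anonymized
# aggregates travel back to the server. All data and names are illustrative.

branch_records = {
    "office_12": [{"name": "A. Smith", "salary": 51000},
                  {"name": "B. Jones", "salary": 64000}],
    "office_14": [{"name": "C. Lee",   "salary": 58000}],
}

def local_analysis_rule(records):
    # Executes at the branch office; personal identifiers never leave it.
    return {"count": len(records),
            "salary_total": sum(r["salary"] for r in records)}

def enterprise_report(summaries):
    # Executes on server 18: combines only the anonymized summaries.
    count = sum(s["count"] for s in summaries)
    total = sum(s["salary_total"] for s in summaries)
    return {"employees": count, "average_salary": round(total / count, 2)}

summaries = [local_analysis_rule(recs) for recs in branch_records.values()]
print(enterprise_report(summaries))   # -> {'employees': 3, 'average_salary': 57666.67}
```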

By way of further example, the rules engine 18a can have two distinct portions, e.g., one that embodies the algorithm for rule selection (e.g., in the manner of the rule finder disclosed in U.S. Pat. No. 5,826,250, assigned to the assignee hereof and incorporated by reference herein), and the other that generates/executes the executable code once the requisite rule has been selected. The rules engine 18a (or other functionality) can apportion and distribute these portions separately as required.

Take, for example, an instance where server 18 receives a request for executing a “loan validation” process for a specific context. Server 18 stores rules for multiple versions of the “loan validation” process for different contexts. However, the server does not have the computing power to execute the ‘rule finder’ algorithm to select the right version and/or the server does not have the code generation portion of the engine to execute the selected rule. The server retrieves the rules for all versions of the “loan validation” process and transmits them, along with the rule selection portion of the engine, to a remote digital data processor that has installed thereon the code generation portion of a rules engine. Upon receiving the rule finder portion of the engine along with the rules for all versions, the target digital data processor selects and executes the correct loan validation process.
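
A minimal sketch of that apportionment, under assumed names and with the inter-processor transmission merely simulated by function calls, might look as follows; the context matching, version table, and limits are hypothetical.

```python
# Hypothetical sketch: the rule-selection ("rule finder") portion of the engine
# travels with the candidate rule versions to a processor that already has the
# code-generation/execution portion installed. All names and logic are assumed.

loan_validation_versions = [
    {"context": {"region": "US"}, "logic": lambda amount: amount <= 50000},
    {"context": {"region": "EU"}, "logic": lambda amount: amount <= 40000},
]

def rule_finder(versions, context):
    # Rule-selection portion: picks the rule version matching the request context.
    for version in versions:
        if version["context"] == context:
            return version
    raise LookupError("no version matches the requested context")

def execution_portion(rule, amount):
    # Code-generation/execution portion, assumed already installed remotely.
    return rule["logic"](amount)

# "Transmitting" the versions plus the rule finder to the remote processor is
# simulated here by simply invoking both on the target side.
selected = rule_finder(loan_validation_versions, {"region": "EU"})
print(execution_portion(selected, 35000))   # -> True (within the EU limit)
```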

The foregoing are merely examples; those skilled in the art will appreciate that still other ways of implementing/executing the rules engine 18a are possible. By way of non-limiting example, the rules engine 18a may have additional distinct components/portions that can be apportioned and distributed separately. These may include (but are not limited to) a data access component used for processing data during rule execution, a session management component for keeping track of activity across sessions of interaction with a digital data processor and/or a performance monitoring component for monitoring and interacting with various system resources/event logs in order to manage performance thresholds. Still other types of distinct components/portions may be part of the rules engine 18a.

Applications 12a and 14a, of digital data processors 12, 14, respectively, may too comprise rules engines of the type described above, as adapted in accord with the teachings hereof. These applications may be configured (e.g., at least partially using rules stored in a rules base as described above) as stand-alone applications and/or may be embedded in (or coupled to) other software applications, e.g., web browsers. While in some embodiments, such applications 12a, 14a are architected and operated similarly to rules engine 18a, in other embodiments they embody a subset of the functionality of engine 18a, e.g., suited to the processing resources and/or demands of the digital data processors 12, 14 upon which they operate. Instead or in addition, such applications 12a, 14a can comprise other functionality than that provided in rules engine 18a, again, for example, suited to the processing resources and/or demands of the digital data processors 12, 14 upon which they operate.

For sake of simplicity, the discussion that follows focuses on aspects of operation of rules engine 18a; it will be appreciated that other rules engines (e.g., 12a, 14a in certain embodiments) may operate similarly in these regards.

As noted above, rules engine 18a processes/executes rules from a rules base in order to process data in (and/or for storage to) a transactional database. In instances where the engine 18a executes rules from rules base 18c in order to process data in (and/or store data to) database 18b, the engine 18a may operate in the conventional manner known in the art. However, where any of (i) the data to be accessed (or stored) is resident in a data base 12b, 14b of another of the digital data processors, (ii) the rules to be executed (including, potentially, those defining the rules engine 18a or a portion thereof) are contained in a rules base 12c, 14c of another of those digital data processors, and (iii) the rules (again, potentially, those defining the rules engine 18a or a portion thereof) are to be executed using the rules engine 12a, 14a of another of those digital data processors, the rules engine 18a works with one or more of the coordination modules to effect the desired processing. Even in instances where the rules, portions of the engine, and/or data required to effect the desired processing are local to digital data processor 18, the rules engine 18a may work with the coordination modules (e.g., 12d, 14d, 18d) to effect the desired processing over multiple digital data processors (e.g., for access to more computing resources/power) in accord with the teachings hereof.

In this regard, coordination modules 12d, 14d, 18d comprise functionality resident on (and/or coupled to) each of the respective processors 12, 14, 18 that facilitates access to and transfer of rules, the rules engine or portion thereof, or data (and, preferably, all three) between the digital data processors. Operation of the module(s) 12d, 14d, 18d can include one or more of (i) obviating obstacles presented by firewalls 12e, 14e, 18e or other functionality to such inter-processor accesses and transfers, (ii) effecting such access and transfers, and (iii) querying a digital data processor to determine whether it has resources (e.g., a rules base, a transactional data base, a portion of or the entire rules engine, and/or computing power) to facilitate the completion of a task (e.g., by executing a given one or more rules on a given set of data).

As above, not all of the coordination modules 12d, 14d, 18d are utilized in all embodiments. Conversely, other embodiments may utilize additional such modules, e.g., one module per digital data processor for facilitating rules access/transfer between digital data processors, one module for facilitating transaction access/transfer, and so forth. Likewise, some such modules could be directed to querying digital data processors for resources, while others are directed to access and transfers. These and other such variations are within the ken of those of ordinary skill in the art based on the teachings hereof.

The modules 12d, 14d, 18d may comprise stand-alone functionality stored and executing within each of the respective digital data processors 12, 14, 18. Alternatively, they may comprise functionality that is embedded in the rules engine 18a and/or applications 12a, 14a and/or in other applications or operating system functions resident on the respective devices 12, 14, 18. Moreover, in embodiments that include multiple such modules 12d, 14d, 18d, functionality may be distributed and/or divided among them.

Still further, although the modules 12d, 14d, 18d are shown forming part of the respective digital data processors 12, 14, 18 in the illustrated embodiment, in other embodiments one or more of those modules may execute on still other digital data processors (not shown) that are in communication coupling with the respective processors 12, 14, 18 and that otherwise provide the functionality described here.

Operation of a coordination module 18d in accord with one practice of the invention is illustrated in FIG. 2. It will be appreciated that the sequence of steps shown in that drawing and discussed below is by way of example and that other embodiments may perform the same or different functions using alternate sequences of steps.

In step 20, the module, which may be coupled to a local rules engine 18a, responds to a request for access to a rule by determining if that rule is present in a rules base 18c local to the digital data processor 18—and, for example, it is therefore accessible to a local engine 18a without crossing the firewall 12e, 14e of another digital data processor. The module 18d can make that determination by checking for the presence of the local rules base 18c and/or, if present, by determining if the requested rule itself is present. Alternatively, or in addition, the module 18d can make the determination by checking parameters or other indicators of rule presence, e.g., in the rule request signaled to any of the module 18d and the engine 18a and/or in a request made by the engine 18a. The parameters or other indicators of rule presence may also be found in a registry of the digital data processor 18 and/or elsewhere.

If the determination of step 20 is in the affirmative, operation proceeds to step 22, where the module 18d determines if data implicated by the rule (e.g., data to be processed by the rule or otherwise necessary for its execution) is present in a data base 18b local to the digital data processor 18—and, again, for example, it is therefore accessible to the local engine 18a without crossing the firewall 12e, 14e of another digital data processor. The module 18d can make that determination by checking for the presence of the local data base 18b and/or, if present, by determining if the requested data are present. Alternatively, or in addition, the module 18d can make the determination by checking parameters or other indicators of data presence, e.g., in the rule request signaled to any of the module 18d and the engine 18a and/or in a request made by the engine 18a. The parameters or other indicators of data presence may also be found in a registry of the digital data processor 18 and/or elsewhere.

If the determination of step 22 is affirmative, operation proceeds to step 24, where the module 18d determines if the portion of the rules engine (e.g., 18a) that is required to execute the requested rule is present locally on digital data processor 18. To this end, the module can query for local presence on digital data processor 18 of the component(s)/module(s) that make up the requisite portion(s) of the rules engine. In other embodiments, e.g., where those requisite portion(s) are implemented using rules, the module 18d can determine, for example, if those rules are locally present by querying a local database/repository (e.g., rules base 18c, transaction data base 18b). Alternatively, or in addition, the module can check parameters or other indicators of engine presence, e.g., in the rule request signaled to any of the module 18d and the engine 18a and/or in a request made by the engine 18a, in a registry of the digital data processor 18 and/or elsewhere.

If the determination of step 24 is in the affirmative, operation proceeds to step 26, where the module 18d determines if the rule is to be executed locally, i.e., on digital data processor 18, or whether it is to be executed remotely, e.g., on digital data processors 12, 14. The module 18d can make that determination using a variety of methods including, but not limited to, querying a local rules engine (e.g., 18a) and/or checking parameters or other indicators, e.g., in the rule request signaled to any of the module 18d and the engine 18a and/or in a request made by the engine 18a. Such parameters or other indicators may also be found in a registry of the digital data processor 18 and/or elsewhere. Alternatively, or in addition, the module 18d can make the determination based on load-balancing, network speed and traffic, data coherency or other factors within the ken of those of ordinary skill in the art based on the teachings hereof.

If the determination in steps 20, 22, 24 and 26 is in the affirmative—that is, all resources required to execute the requested rule are present locally at digital data processor 18 and the rule is to be executed there, the determination in step 28 is affirmative and the operation proceeds to step 30, where the module 18d defers to local engine 18a for execution of the requested rule on the required data. The engine 18a (or the required portion thereof) proceeds by accessing the rule and data in the local rules base 18c and data base 18b, and by executing the rule to process the data accordingly.
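
Solely to illustrate the flow of steps 20 through 30 (and not any particular implementation), the following sketch walks a request through the local-presence checks and, when everything is local, defers to the local engine; the step numbers refer to FIG. 2, while the dictionaries, field names, and sample rule are hypothetical.

```python
# Hypothetical sketch of steps 20-30 of FIG. 2: check local presence of the
# rule (step 20), its data (step 22), and the requisite engine portion
# (step 24); decide on local execution (step 26); and defer to the local
# engine (step 30). Data structures and names are illustrative assumptions.

def handle_request(request, local):
    if request["rule"] not in local["rules_base"]:        # step 20
        return "locate the rule remotely (step 32)"
    if request["data"] not in local["data_base"]:         # step 22
        return "locate the data remotely (step 32)"
    if not local["engine_present"]:                       # step 24
        return "locate the engine remotely (step 32)"
    if not request.get("execute_locally", True):          # step 26
        return "choose a remote location (steps 34-36)"
    rule = local["rules_base"][request["rule"]]           # step 30: defer to
    return rule(local["data_base"][request["data"]])      # the local engine

local_processor_18 = {
    "rules_base": {"flag_large_purchase": lambda row: row["amount"] > 1000},
    "data_base": {"txn-7": {"amount": 2500}},
    "engine_present": True,
}
print(handle_request({"rule": "flag_large_purchase", "data": "txn-7"},
                     local_processor_18))   # -> True
```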

If the determination in any of steps 20, 22 and 24 is in the negative—that is, if any of the requested rule, required data and engine (or portion thereof) are not locally present on digital data processor 18, the operation proceeds to step 32, where the module queries one or more other digital data processors (e.g., 12, 14) to determine the location(s) of any of the requested rule, required data and engine (or portion thereof). By way of non-limiting example, the module 18d can determine the location of the requested rule (and corresponding rules base), required data and engine (or portion thereof) by checking parameters or other indicators, e.g., in the rule request signaled to any of the module 18d and the engine 18a and/or request made by the engine 18a, in a registry of the digital data processor 18 and/or elsewhere. Alternatively, or in addition, module 18d can query the digital data processors 12, 14 directly to determine if any of the required/requested resources are maintained by them. Preferably, this is accomplished by communication between module 18d and its counterparts 12d, 14d on each of digital data processors 12, 14—which modules 12d, 14d can, themselves, query the local digital data processor 12, 14 for the requisite resource(s).

If the determination in step 32 is in the negative for any of the required/requested resources, the operation proceeds to and terminates at step 38, where the coordination module returns an error message in response to the rule request, indicating the absence or unavailability of any of the requested rule, required data and engine (or portion thereof).

If the determination in step 32 is in the affirmative for any of the requested/required resources that were not already present locally at digital data processor 18 (as previously determined by steps 20-24), the operation proceeds to step 26 to make the decision of local versus remote execution of the requested rule as described above. If the determination in step 26 is affirmative, at least one of the requested rule, required data and engine (or portion thereof) that is located remotely at another digital data processor (e.g., 12 or 14) as identified in step 32, is retrieved in step 40 before executing the requested rule locally on digital data processor 18 in step 30. As indicated by the callout 46, such retrieval is performed by the module 18d, following a negative determination in step 28 by (i) validating that the one or more digital data processors identified in step 32 (e.g., 12 and/or 14) will grant access to the requested/required resource(s) and, (ii) retrieving that/those resource(s) from those one or more digital data processors (i.e., 12 and/or 14) to digital data processor 18.

In regard to step 40(i), the module 18d can validate that the one or more identified digital data processors will grant access by querying the digital data processor(s) identified in step 32 accordingly. This can be done, for example, through communication with the module 12d, 14d of the identified digital data processor, which module can validate the presence of any of the requested/required resource(s) (if it has not already done so). In some embodiments, the validating module (e.g., 12d or 14d) can open a communications port in the respective digital data processor and can prepare the requested/required resource for access via that port.

In regard to step 40(ii), the module 18d retrieves and/or transfers the requested/required resource from the one or more identified digital data processors to digital data processor 18 for local execution. In some embodiments, a local rules engine 18a (if already present) may access the requested/required resource (e.g., data, transaction database, rule and/or rules base) directly from the identified digital data processor, e.g., via a port opened in step 40(i). In other embodiments, the module 18d may also transfer one or more requested/required resources to an identified digital data processor (e.g., 12 or 14) for the requested processing to be performed remotely at the identified digital data processor. Alternatively, or in addition, the module 18d may notify the identified digital data processor (e.g., 12 or 14) and, preferably, its respective coordination module, identified in step 32, passing to it the relevant information for the requested processing to be performed (e.g., the identity of the rule to be executed). The identified digital data processor may perform the requested processing using the resources/information provided to it. In other embodiments, where the required resources are not transferred along with the relevant information, the identified digital data processor may perform the requested processing by utilizing the methodology of FIG. 2 itself in order to access the required resources in connection therewith. The discussion that follows provides further details about this step.
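
A deliberately simplified sketch of steps 40(i) and 40(ii) follows, using a loopback TCP socket to stand in for the inter-processor (and cross-firewall) link; the port handling, payload format, and function names are assumptions made solely for illustration.

```python
# Hypothetical sketch of step 40: the validating module on the remote processor
# opens a communications port and prepares the requested resource for access
# via that port (40(i)); module 18d then retrieves the resource through the
# port (40(ii)). A loopback socket stands in for the network link.

import json
import socket
import threading

def remote_module_serve(listening_socket, resource):
    # Step 40(i): the peer coordination module has opened a port; it now
    # serves the requested/required resource to the first connecting peer.
    conn, _ = listening_socket.accept()
    with conn:
        conn.sendall(json.dumps(resource).encode())
    listening_socket.close()

def module_18d_retrieve(port):
    # Step 40(ii): module 18d retrieves the resource via the opened port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", port))
        return json.loads(client.recv(65536).decode())

# The "remote" module opens a loopback port and prepares a selected resource.
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("127.0.0.1", 0))      # port 0 lets the OS pick a free port
server_socket.listen(1)
port = server_socket.getsockname()[1]
resource = {"rule": "flag_large_purchase", "data": {"txn-7": {"amount": 2500}}}
threading.Thread(target=remote_module_serve, args=(server_socket, resource)).start()
print(module_18d_retrieve(port))          # the retrieved resource, now local
```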

Upon completion of step 40, control transfers to any of steps 30, 42 or 44 depending upon the outcome of the previous steps in the operation of the coordination module 18d, as indicated in the drawing. Thus, continuing with the current example of retrieving requested/required resources in step 40 from one or more identified digital data processors (e.g., 12 and/or 14) for local execution at digital data processor 18, control transfers to step 30 to complete the requested processing. However, if the determination in step 26 is in the negative—that is, it is determined that the requested rule is to be executed remotely, then the appropriate location for the completion of such remote processing is based upon a combination of steps 34-44 as well as the outcome of previous steps 20-24.

By way of non-limiting example, despite the local presence of all the requested/required resources on digital data processor 18 (i.e., affirmative responses in steps 20-24), a negative determination in step 26 may be due to parameters or other indicators in a local registry of the digital data processor 18 and/or, e.g., in the rule request signaled to any of the module 18d and the engine 18a and/or in a request made by the engine 18a. In this case, there is no previously identified location from step 32. Thus, the response to step 34 is in the negative and the operation proceeds to step 36, where the module 18d determines if there is another digital data processor (e.g., 12, 14) suited for executing the requested rule. In some embodiments, it makes that determination by querying the local rules engine 18a and/or by checking the parameters or other indicators as mentioned above. Alternatively, or in addition, the module 18d can make the determination based on load-balancing, network speed and traffic, availability of the required/requested resources (or portions thereof) on one or more other digital data processors, data coherency or other factors within the ken of those of ordinary skill in the art based on the teachings hereof. For example, a query during the operation at step 36 (or at a prior step) may reveal that an alternative digital data processor with higher computing power than processor 18 and/or than another digital data processor identified in a local registry has all of the required/requested resources. In that case, module 18d may simply notify the alternative digital data processor (and/or its coordination module) to perform the requested processing as opposed to performing it locally or remotely at the other digital data processor that was identified in the local registry. More generally, this example is also reflective of some embodiments discussed throughout this document that may involve scenarios and/or steps where duplicate versions, or at least versions that are comparable in terms of functionality, of one or more requested/required resources may exist at multiple locations.

If the determination in step 36 is negative, the operation proceeds to and terminates at step 38 in the illustrated embodiment where the coordination module 18d returns an error message indicating the absence or unavailability of a suitable digital data processor for remote execution of the requested processing/rule. In other embodiments, if the requested/required resources are present locally, the coordination module may ignore the negative outcome of step 26 and execute the requested rule locally as default if a suitable remote digital data processor (e.g., 12 and 14) is not identified in step 36.

If the determination in step 36 is in the affirmative, coordination module 18d transfers the requested/required resources from digital data processor 18 and/or provides the relevant information to the other digital data processor identified in step 36, e.g., by employing the methodology discussed above in connection with steps 40(i) and (ii). In some embodiments, coordination module 18d may transfer only a portion of the requested/required resources if it is determined (as mentioned above) that the other identified digital data processor(s) already possess the remaining portion of the requested/required resources. Once any such transfer and/or notification is completed in step 40 from digital data processor 18, processing is completed by executing the requested rule remotely in step 44 at the other digital data processor identified in step 36.

Preceding the negative determination in step 26, a negative outcome in any of steps 20-24 indicates that at least one of the requested/required resources is not locally present on digital data processor 18 and that one or more digital data processors (e.g., 12 and/or 14) may have been identified in step 32 to locate such requested/required resource as previously discussed. In situations where (i) at least one but not all of the determinations in steps 20-24 are in the affirmative, (ii) the determination in step 26 is in the negative, and (iii) the determination in step 34 is affirmative, the operation proceeds to remotely execute the requested rule. If only one digital data processor was identified in step 32, then module 18d transfers the portion of the requested/required resources at digital data processor 18 (e.g., by employing the methodology discussed above in connection with steps 40(i) and (ii)) to the single digital data processor identified in step 32, where the remaining requested/required resources are located. Once that transfer is completed in step 40, the requested rule/processing is performed remotely in step 42 at the location identified in step 32.

In other embodiments, two or more locations may be identified in step 32, e.g., the required data may be located at digital data processor 12 and the engine may be located at digital data processor 14. In such embodiments, where the step 26 response is negative and the step 34 response is affirmative, module 18d may prioritize all available location options based upon various factors including, but not limited to, prioritization criteria specified in the rule request signaled to module 18d and/or the engine 18a, and/or prioritization rules stored in rules base 18c and/or elsewhere on digital data processor 18. Alternatively, or in addition, module 18d may prioritize all available location options based upon the relative computing resources (e.g., CPU, memory etc.) at each location, network traffic or any other factors within the ken of those of ordinary skill in the art based on the teachings hereof. In any event, module 18d will transfer the portion of the requested/required resources from digital data processor 18 to the highest priority location and, once the transfer(s) is completed in step 40, the requested rule/processing is performed remotely at that location in step 42.
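
As a purely illustrative sketch of such prioritization (the attributes and weighting below are assumptions, not a prescribed formula), candidate locations might be ranked as follows.

```python
# Hypothetical sketch of prioritizing candidate locations for remote execution.
# The attributes (cpu_ghz, free_memory_gb, network_cost) and the weighting are
# illustrative assumptions only; any other factors could be used instead.

candidate_locations = [
    {"name": "processor_12", "cpu_ghz": 2.4, "free_memory_gb": 4,  "network_cost": 0.8},
    {"name": "processor_14", "cpu_ghz": 3.2, "free_memory_gb": 16, "network_cost": 0.3},
]

def priority(location):
    # Higher compute and memory raise priority; network cost lowers it.
    return (location["cpu_ghz"]
            + 0.1 * location["free_memory_gb"]
            - location["network_cost"])

highest_priority = max(candidate_locations, key=priority)
print(highest_priority["name"])   # -> processor_14
```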

A negative determination in step 26 may be followed by a negative determination in step 34. Following the combined negative determinations, an attempt is made via step 36 (as described above) to identify one or more digital data processors other than local processor 18 or the one or more digital data processors identified in step 32.

By way of example, a request may be signaled to coordination module 18d to execute one or more rules that define a plurality of reports. These report rules may be stored locally in rules base 18c and the rules engine 18a required to execute the requested report rules may also be locally present on server digital data processor 18. However, the determination in step 22 may be in the negative because the data to be processed by the requested report rules is not locally present. In such an instance, operation proceeds to step 32 where the coordination module 18d attempts to locate the one or more digital data processors that maintain the required data for reports execution. In one embodiment, the coordination module 18d identifies the location of such digital data processors (e.g., 12, 14) by querying a local registry on digital data processor 18 using parameters or other indicators of data location specified in the rule request that was signaled to the coordination module 18d. The query of the local registry may, for example, reveal that a portion of the required data is located in the transaction database 12b on digital data processor 12 and the remaining portion of the required data is located in the transaction database 14b on digital data processor 14. Next, operation proceeds to step 26 where it may be determined, for example, that the reports will not be executed locally at digital data processor 18 because a pre-requisite for such local execution is data retrieval from digital data processors 12 and 14 over a very slow network connection (e.g., 16). In such an instance, a negative outcome of step 26 is followed by a determination in step 34 of whether to execute the requested report rules remotely on digital data processor 12, 14 or at both locations. This determination may be based on various factors including, but not limited to, load balancing and the correlation between the requested report rules and the required data for requested rule execution at each location. Thus, for example, if CPU speed is sufficient for both digital data processors 12, 14 (e.g., as determined by the registry query mentioned above) and the requested rules can be apportioned to be separately executed at both locations, the operation may proceed through steps 34, 40 and 42 such that the respective portions of the report rules along with the required engine 18a may be transferred appropriately to digital data processors 12 and 14 for remote execution. Alternatively, the determination in step 34 may be that the requested report rules cannot be independently executed at different locations. In that case, the required data and/or transaction data base (e.g., 12b, 14b) is retrieved and/or transferred, along with the requested report rules and engine 18a, to a single digital data processor for execution. In that case, the transfer destination may, for example, be determined based upon a higher CPU speed or any other factor.

As previously mentioned, a retrieval and/or transfer of rules, engine or data between digital data processors 12, 14 and 18 can be accomplished by employing the methodology discussed above in connection with steps 40(i) and 40(ii). Thus, for example, after the determination in step 34 is in the affirmative, the location(s) of the required data for the report rules may be validated (if not already done) through communication between coordination modules 12d, 14d and 18d. In some embodiments, the validating module (e.g., 12d or 14d) can open a communications port in the respective digital data processor and can prepare the required data for access via that port.

Once the ports are opened, the digital data processors 12, 14 and 18 can freely communicate information among each other in step 40(ii). Thus, if it has been determined that digital data processor 12 is to execute the requested report rules, module 18d retrieves the required data portion and/or transaction database 14b from digital data processor 14 and transfers it to digital data processor 12. Furthermore, the requested report rules and the required engine 18a are transferred from digital data processor 18 to the target digital data processor 12. Once such retrieval and transfer are completed, the requested report rules are executed in step 42.
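A minimal sketch of steps 40(i), 40(ii) and 42 as applied to this example follows; the helper names (open_port_and_stage, run_reports_on, and the node/engine methods they call) are hypothetical placeholders for whatever transport the embodiment actually uses:

    # Hypothetical sketch only.
    def open_port_and_stage(validating_module, data_keys):
        """Step 40(i): the data-holding node opens a port and stages the data."""
        port = validating_module.open_communications_port()
        validating_module.stage_for_access(data_keys, port)
        return port

    def run_reports_on(target, data_sources, rules_source, report_rules):
        """Steps 40(ii) and 42: assemble data, rules and engine, then execute."""
        for node, keys in data_sources.items():        # e.g., {processor_14: [...]}
            target.store(node.fetch(keys))             # move data to the target
        target.install_engine(rules_source.export_engine())    # engine 18a
        target.store_rules(rules_source.export(report_rules))  # report rules
        return target.engine.execute(report_rules)     # step 42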

In some embodiments, local registries, files or databases (e.g., 12b, 12c, 14b, 14c, 18b, 18c) on any of digital data processors 12, 14 and 18 are updated following the retrieval and/or transfer of rules, rule bases, engine (or any portion thereof), data and transaction databases from/to such digital data processors. This allows digital data processors 12, 14 and 18 to handle future requests for rule execution accurately and/or efficiently. By way of illustration, once the requested report rules and engine 18a are transferred from digital data processor 18 to digital data processor 12 in the example above, the local registries on either of digital data processors 12, 18 can be updated to reflect that transfer. The operation of coordination module 18d is adjusted accordingly so that it can respond to subsequent requests for execution of those report rules that are signaled to and/or received by the module and/or digital data processor 18.
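A short sketch of the bookkeeping just described, assuming each local registry can be represented as a simple mapping from an item to the node that now holds it (the names record_transfer and transfer_log are illustrative only):

    # Hypothetical sketch only.
    def record_transfer(registries, moved_items, source, destination):
        """Update each participating node's registry after a transfer."""
        for registry in registries:             # e.g., the registries on 12 and 18
            for item in moved_items:            # e.g., the report rules, engine 18a
                registry[item] = destination    # future requests are routed to 12
            registry.setdefault("transfer_log", []).append(
                {"items": list(moved_items), "from": source, "to": destination})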

It will be appreciated that the illustrated embodiment of the operation of coordination module 18d in FIG. 2 is merely exemplary and that certain steps may be omitted, modified or re-ordered without departing from the scope of the disclosure herein. In some embodiments, for example, any of the modules 12d, 14d and 18d may be configured differently based on the business and/or technical requirements that drive the use of the techniques and systems described herein.

By way of non-limiting example, the systems and techniques described herein may be used to provision a computing platform as a service (e.g., a commercially available Platform-as-a-Service or “PaaS” offering) over the internet to multiple concurrent users (e.g., from different companies or “tenant” organizations) for application development, testing and/or deployment, providing greater flexibility and ease of use than the conventional technology and tools available on the market today without sacrificing data security.

In one such embodiment, the server 18 depicted in FIG. 1 is configured as a cloud-based computing platform comprising hardware and software components (e.g., business process management software) that are used by users 11 from one or more tenant organizations over the internet (e.g., network 16) to develop, test and/or deploy their enterprise applications. Such shared use of resources among multiple tenant organizations on a cloud-based server (e.g., 18) allows each respective tenant to quickly develop, test and deploy their applications while avoiding the cost and complexity of buying the underlying hardware and software components and hosting them in their own data centers.

Despite its many benefits, the multi-tenant architecture has traditionally presented significant challenges related to data security and to integration between cloud-based application(s) and the legacy systems/resources located within each of the respective tenants' data centers. These challenges are exacerbated by the business need of many tenant organizations to take a hybrid approach: leveraging a cloud-based platform (e.g., server 18) to develop and test their application(s) and eventually migrating those applications for deployment within their respective data centers, or vice versa. Given the prior state of the technology, one major drawback of this hybrid approach is that the integration configuration of the tenant application(s) with respect to other applications and/or systems (e.g., databases) has to be updated each time the tenant application is migrated into or out of the tenant data center.

Thus, for example, enterprise software applications are typically developed and tested by tenants on a server by creating and/or modifying a plurality of rules that may be stored in a rules base present on the server. These rules can define all aspects of such tenant applications, including their integration with other applications and/or systems, some of which may be located behind tenant firewalls in the tenant's data center. Accordingly, in order to enable communication between a tenant application on the server and other applications, systems and/or functionality located behind tenant firewalls (hereinafter collectively referred to as “tenant legacy systems”), the integration rules for the tenant application might attempt to obviate the obstacles presented by the firewalls, e.g., by opening multiple ports in the tenant firewall depending upon, e.g., the integration method (e.g., SOAP, .NET, JAVA, EJB, etc.) and/or the type of tenant legacy system (e.g., SQL database, web service, etc.) that is being linked to the tenant application. If that same tenant application is subsequently deployed within the tenant's data center (i.e., within the tenant firewalls), the integration rules for that application must be reconfigured to establish a direct link between the tenant application and the tenant legacy systems without any intermediate firewall.
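To make the drawback concrete, a sketch of the conventional multi-port configuration alluded to above follows; the port numbers and the mapping from integration method and legacy-system type to ports are invented for illustration and are not taken from the embodiment:

    # Hypothetical sketch of the conventional approach only.
    CONVENTIONAL_PORTS = {
        ("SOAP", "web service"): 8080,
        (".NET", "web service"): 8081,
        ("JAVA", "SQL database"): 1433,
        ("EJB",  "SQL database"): 1099,
    }

    def ports_to_open(integration_rules):
        """Every firewall port the tenant would conventionally have to open."""
        return sorted({CONVENTIONAL_PORTS[(r["method"], r["system"])]
                       for r in integration_rules})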

Similarly, a tenant may develop and test its application within its data center before migrating it outside its firewall for deployment on a cloud server. At run-time of the application, a rules engine on the server might execute one or more of the plurality of rules that define the application in response to requests/events received by the server, e.g., from users within a tenant data center. The data processed at run-time by such rules could be stored in the database local to the server, or it could be stored in remote tenant databases that may not be accessible to the server (e.g., due to firewalls) to effect the desired processing. In such a system, the conventional prior art approach would require that the integration rules of the tenant application be reconfigured upon migration of the tenant application to the cloud-based server in order to avoid errors/interruptions during execution of the tenant application on the cloud-based server due to inaccessibility of the required data and/or other resources.

Systems and techniques described herein overcome these drawbacks, for example, when configured as described below, by allowing tenant organizations to simulate their data center environment on an external cloud-based infrastructure (e.g., server 18), thus obviating the need to reconfigure the integration framework of the tenant application(s) upon migration.

This and other benefits of the systems and techniques described herein become apparent in embodiments of the type illustrated in FIG. 1 which are configured such that digital data processor 18 operates as a server on a “cloud” platform, e.g., of the type commercially available from Amazon, SalesForce, Google, or other cloud-computing providers. In such embodiments, digital data processors 12 and 14 can be, for example, different “tenant” digital data processors that are in communication with the server 18 over network(s) to access resources (e.g., rules, applications, modules, databases, rules bases, data, code, scripts, hardware, etc.) that may be either “generic” (i.e., available to users/systems associated with all tenants that are able to connect to the server 18) or “tenant-specific” (i.e., only accessible to users/systems associated with a particular tenant).
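A minimal sketch of the generic versus tenant-specific visibility check described above, assuming a simple catalog that maps each resource to its owning tenant (or to None when the resource is generic); the function name visible_resources is hypothetical:

    # Hypothetical sketch only.
    def visible_resources(catalog, tenant_id):
        """Resources a given tenant's users may access on server 18."""
        return [name for name, owner in catalog.items()
                if owner is None or owner == tenant_id]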

As a departure from the conventional approach mentioned above, embodiments of the invention configured as described herein allow tenant organizations to build seamless integration between their enterprise application(s) and the tenant legacy systems without having to reconfigure the application(s) multiple times depending upon where the application(s) is developed, tested and/or deployed. This is accomplished by establishing communication between coordination modules that are installed on the cloud-based server (or wherever the application is developed, tested and/or deployed outside the tenant firewall) as well as within each tenant's data center.

Accordingly, for example, the first time a user 11 signals/sends a request (e.g., an HTTP request or otherwise) using a digital data processor (e.g., 12, 14, 11a) to access any of the resources that are located on the server 18, any of the coordination module 18d and engine 18a first authenticate the user by, e.g., matching parameters or other indicators of user identification in the request with data related to authorized tenant users previously stored in any of the local databases (e.g., 18b, 18c), registries, files and elsewhere. If the user is authenticated/verified as an authorized user who is able to access resources on server 18 on behalf of a tenant organization, a coordination module (e.g., 12d, 14d) can be transmitted back in response to the initial request. The coordination module that is transmitted back (e.g., 12d, 14d) may be installed on any digital data processor (e.g., 12, 14, 11a) located behind/protected by the firewall of the tenant organization with which the user is associated. In one embodiment, the coordination module transmitted back to the authorized tenant user may be installed in the web browser of the digital data processor being used by the tenant user to communicate with server 18. Upon installation, the coordination module (e.g., 12d, 14d) may prompt the user to provide information related to the tenant legacy systems that may need to be integrated with the tenant application(s) on server 18. This information is then transmitted to server 18, where it is stored in any of the local databases (e.g., 18b, 18c), registries, files and elsewhere.
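A sketch of this first-contact exchange, under the assumption that authorized users are kept in a simple lookup table and that the coordination module is delivered as an installable package; all names (handle_first_request, register_legacy_systems, proxy-installer-package) are hypothetical:

    # Hypothetical sketch only.
    def handle_first_request(request, authorized_users):
        """Authenticate the tenant user and, if authorized, send back a module."""
        user = authorized_users.get(request["user_id"])
        if user is None or user["token"] != request["token"]:
            return {"status": "denied"}
        # Transmit a coordination module for installation behind the tenant
        # firewall (e.g., in the user's web browser).
        return {"status": "ok", "coordination_module": "proxy-installer-package"}

    def register_legacy_systems(legacy_registry, tenant_id, reported_systems):
        """Store the legacy-system information reported by the installed module."""
        legacy_registry.setdefault(tenant_id, []).extend(reported_systems)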

Thus, when an authorized user starts to develop and test applications on server 18 on behalf of tenant organizations and stores the legacy system information for that tenant on the server 18, any of the authorized developers associated with that tenant organization can configure integration rules for tenant application(s) on server 18 in exactly the same way as if they were developing the integration rules on a digital data processor located within that tenant's data center. Similarly, even if the integration rules were first built within that tenant's data center and then later migrated to server 18, the legacy system information on server 18 coupled with the communication between coordination module 18d and the coordination module located within the tenant's data center (e.g., 12d, 14d) obviate the need to reconfigure the integration rules to maintain the integration links that are defined by such rules.

FIG. 3 illustrates operation of an embodiment of the invention and, particularly, for example, operation of module 18d on digital data processor 18 at run-time within a multi-tenant cloud-based environment as described above. In one such embodiment, the module 18d can be configured to omit a few steps and simplify its operation as compared to the illustrated embodiment in FIG. 2. Namely, steps 20, 24, 34, 36 and 44 of the operation depicted in FIG. 2 are omitted from FIG. 3 for various reasons. For example, the platform-as-a-service business model by its very nature typically requires that the service provider (e.g., salesforce.com, Google, etc.) provision all required hardware and software components to its tenants. Thus, the coordination module may not need to verify the local presence of all required portions of rules engine 18a in step 24. In fact, all requests by users for rule execution on behalf of the tenants at run-time may initially be signaled to and/or received by engine 18a. After the authentication/verification process (as described above) for the user making the request, the rules engine 18a may also verify the availability of the requested rule before working with module 18d to access the required data in step 22. Therefore, even though rules engine 18a may employ the same techniques as step 20 illustrated in FIG. 2 to verify the local presence of the requested rule, that step 20 can be omitted from the simplified operation of module 18d in the illustrated embodiment.
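A sketch of this simplified run-time entry path, assuming the engine exposes authentication and rule-lookup operations and then hands off to the coordination module; the method names are illustrative only:

    # Hypothetical sketch only.
    def handle_runtime_request(request, engine, coordination_module):
        """Authenticate, confirm local rule presence, then resolve the data."""
        if not engine.authenticate(request["user_id"], request["token"]):
            return {"status": "denied"}
        if not engine.has_rule(request["rule"]):        # replaces step 20 of FIG. 2
            return {"status": "unknown rule"}
        # Step 22 onward is handled by the coordination module as in FIG. 2.
        return coordination_module.resolve_data_and_execute(request)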

Once the verification process related to user authentication and local rule presence is completed, coordination module 18d will respond to requests for data access in substantially the same way as described previously in connection with FIG. 2. That is, the module 18d will verify the local presence of the required data on server 18 using the techniques discussed previously. If the data is locally present, the module will typically defer to the local engine 18a for local execution of the rule on the required data through steps 26-30. On the other hand, if the required data and/or database are not locally present, the coordination module 18d will attempt to identify the location of the required data in step 32 by checking any of the tenant information, user information, parameters and other indicators of data presence, e.g., in the request signaled to the engine 18a and/or the module 18d. In addition or instead, the module 18d may also check integration rules (if any) that may be referenced by or otherwise related to the requested rule and/or the tenant legacy system information that may be found in a registry, database or elsewhere on the digital data processor 18. Once the appropriate tenant location of the required data is identified in step 32, the coordination module 18d may retrieve the data for local execution of the requested rule(s) and/or transfer the requested rule(s) to the identified tenant location for execution, all as previously discussed in connection with steps 26-30 and 40, 42 and 46 depicted in FIG. 2.
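A sketch of the step 32 lookup order just described (request indicators first, then any related integration rule, then the stored tenant legacy-system information); locate_required_data and the dictionary keys are hypothetical:

    # Hypothetical sketch only.
    def locate_required_data(request, integration_rules, legacy_registry):
        """Return the tenant location of the required data, or None if local."""
        if "data_location" in request:                        # indicator in the request
            return request["data_location"]
        rule = integration_rules.get(request["rule"])
        if rule and "data_location" in rule:                  # related integration rule
            return rule["data_location"]
        systems = legacy_registry.get(request["tenant"], [])  # stored legacy info
        return systems[0]["location"] if systems else None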

It will be appreciated that, while effecting any of the notifications, transfers and retrievals of data and/or rules in step 40 of the illustrated embodiment, the coordination module 18d may open only a single port in the tenant firewall. This is a more secure approach than opening multiple ports (e.g., based on integration methods, etc.) as required by the conventional approach described above.
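A sketch contrasting this with the conventional scheme: all notifications, transfers and retrievals for a tenant are multiplexed over one assumed port rather than one port per integration method; the port number and the tunnel function are invented for illustration:

    # Hypothetical sketch only.
    TENANT_PROXY_PORT = 8443   # assumed single port opened in the tenant firewall

    def tunnel(messages):
        """Send every coordination message through the one open port."""
        return [{"port": TENANT_PROXY_PORT, "payload": m} for m in messages]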

It will be appreciated that steps 34, 36 and 44 from the operation depicted in FIG. 2 are also omitted from the illustrated embodiment in FIG. 3 for the sake of simplicity. It is entirely possible, for example, that the tenant location identified in step 32 (e.g., digital data processor 12) may not have the required resources (e.g., computing power and/or a rules engine) for execution of the requested rule(s). In that case, the coordination module 18d may work with the local coordination module (e.g., 12d) at the appropriate tenant data center to identify one or more other digital data processors within that data center that have the necessary resources for execution of the requested rule(s).
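A sketch of the fallback just described, assuming each node in the tenant data center advertises its CPU speed and whether a rules engine is installed; pick_capable_node and the thresholds are hypothetical:

    # Hypothetical sketch only.
    def pick_capable_node(data_center_nodes, needs_engine=True, min_cpu_ghz=2.0):
        """Return the name of a node able to execute the requested rule(s)."""
        for node in data_center_nodes:
            if node["cpu_ghz"] >= min_cpu_ghz and (node["has_engine"] or not needs_engine):
                return node["name"]
        return None   # no suitable node within this data center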

Described above are systems and methods meeting the foregoing objects. It will be appreciated that the embodiments illustrated and described herein are merely examples of the invention and that other embodiments incorporating changes thereto fall within the scope thereof.

Claims

1. A distributed processing system comprising:

a server digital data processor coupled to a rules base that stores a plurality of rules that define an application, wherein the server digital data processor operates on a cloud platform,
an integration link used for communication of one or more data between the application and a tenant legacy system during execution of the application, wherein at least one integration rule among the plurality of rules defines the integration link, and wherein the tenant legacy system comprises at least one of a database and a web service that is communicatively coupled to the server digital data processor,
one or more coordination modules associated with a respective one of the server digital data processor and the tenant legacy system that facilitate the communication between the tenant legacy system and the application in accordance with the integration rule and other tenant legacy system information accessible to the server digital data processor, and
a firewall that is coupled to the one or more networks and that interrupts the integration link between the application and the tenant legacy system, absent intervention of the one or more coordination modules and the other tenant legacy system information accessible to the server digital data processor,
wherein a tenant data center environment is simulated such that the one or more coordination modules and the tenant legacy system information accessible to the server digital data processor obviate a need to reconfigure the integration rule, so as to maintain the integration link regardless of execution of the application on the server digital data processor or a tenant digital data processor, and wherein the tenant legacy system is directly accessible to the tenant digital data processor without the firewall preventing such access.

2. The system of claim 1, wherein the one or more coordination modules facilitate the communication between the tenant legacy system and the application by making available the tenant legacy information to the server digital data processor in response to a request from a rules engine executing on at least one of the server digital data processor and the tenant digital data processor.

3. The system of claim 2, wherein the one or more coordination modules make available the tenant legacy information to the server digital data processor by opening one or more communications ports on the firewall.

4. The system of claim 3, wherein the one or more coordination modules make available the tenant legacy information to the tenant digital data processor by opening a single communications port on the firewall and obviate a need to open a plurality of communications ports on the firewall.

5. The system of claim 1, wherein the one or more coordination modules retrieve the tenant legacy information for use in execution of one or more selected rules from the rules base coupled to the server digital data processor.

6. The system of claim 1, wherein the one or more coordination modules transfer one or more selected rules from the rules base coupled to the server digital data processor for execution on the tenant digital data processor.

7. The system of claim 1, wherein the at least one integration rule defines the integration link according to at least one of a Simple Object Access Protocol (SOAP) integration method, a .NET integration method, a Java integration method, and an Enterprise Java Beans (EJB) integration method.

8. The system of claim 1, wherein the server digital data processor transmits the one or more coordination modules to the tenant digital data processor in response to a request to access one or more resources available to the server digital data processor.

9. The system of claim 8, wherein the server digital data processor installs the one or more transmitted coordination modules in a web browser executing on the tenant digital data processor.

10. A method of distributed rules processing, the method comprising:

coupling a server digital data processor to a rules base that stores a plurality of rules that define an application, wherein the server digital data processor operates on a cloud platform,
defining an integration link for communication of one or more data between the application and a tenant legacy system during execution of the application, wherein at least one integration rule among the plurality of rules defines the integration link, and wherein the tenant legacy system comprises at least one of a database and a web service that is communicatively coupled to the server digital data processor,
facilitating the communication between the tenant legacy system and the application, via one or more coordination modules associated with a respective one of the server digital data processor and the tenant legacy system, in accordance with the integration rule and other tenant legacy system information accessible to the server digital data processor, and
simulating a tenant data center environment such that the one or more coordination modules and the tenant legacy system information accessible to the server digital data processor obviate a need to reconfigure the integration rule, so as to maintain the integration link regardless of execution of the application on the server digital data processor or a tenant digital data processor, wherein the tenant legacy system is directly accessible to the tenant digital data processor without a firewall preventing such access, and
wherein the firewall is coupled to the one or more networks and interrupts the integration link between the application and the tenant legacy system, absent intervention of the one or more coordination modules and the other tenant legacy system information accessible to the server digital data processor.

11. The method of claim 10, wherein the facilitating the communication between the tenant legacy system and the application comprises making available, via the one or more coordination modules, the tenant legacy information to the server digital data processor in response to a request from a rules engine executing on at least one of the server digital data processor and the tenant digital data processor.

12. The method of claim 11, wherein the making available the tenant legacy information to the server digital data processor comprises opening one or more communications ports on the firewall.

13. The method of claim 12, wherein the making available the tenant legacy information to the tenant digital data processor comprises opening a single communications port on the firewall so as to obviate a need to open a plurality of communications ports on the firewall.

14. The method of claim 10, further comprising retrieving the tenant legacy information, via the one or more coordination modules, for use in execution of one or more selected rules from the rules base coupled to the server digital data processor.

15. The method of claim 10, further comprising transferring one or more selected rules from the rules base coupled to the server digital data processor, via the one or more coordination modules, for execution on the tenant digital data processor.

16. The method of claim 10, wherein the at least one integration rule defines the integration link according to at least one of a Simple Object Access Protocol (SOAP) integration method, a .NET integration method, a Java integration method, and an Enterprise Java Beans (EJB) integration method.

17. The method of claim 10, further comprising transmitting the one or more coordination modules from the server digital data processor to the tenant digital data processor, in response to a request to access one or more resources available to the server digital data processor.

18. The method of claim 17, further comprising installing the one or more transmitted coordination modules, via the server digital data processor, in a web browser executing on the tenant digital data processor.

Referenced Cited
U.S. Patent Documents
4047059 September 6, 1977 Rosenthal
4344142 August 10, 1982 Diehr, II et al.
4602168 July 22, 1986 Single
4607232 August 19, 1986 Gill, Jr.
4659944 April 21, 1987 Miller, Sr. et al.
4701130 October 20, 1987 Whitney et al.
4866634 September 12, 1989 Reboh et al.
4884217 November 28, 1989 Skeirik et al.
4895518 January 23, 1990 Arnold et al.
4930071 May 29, 1990 Tou et al.
4953106 August 28, 1990 Gansner et al.
5062060 October 29, 1991 Kolnick
5077491 December 31, 1991 Heck et al.
5093794 March 3, 1992 Howie et al.
5119465 June 2, 1992 Jack et al.
5129043 July 7, 1992 Yue
5136184 August 4, 1992 Deevy
5136523 August 4, 1992 Landers
5140671 August 18, 1992 Hayes et al.
5193056 March 9, 1993 Boes
5199068 March 30, 1993 Cox
5204939 April 20, 1993 Yamazaki et al.
5228116 July 13, 1993 Harris et al.
5259766 November 9, 1993 Sack et al.
5262941 November 16, 1993 Saladin et al.
5267175 November 30, 1993 Hooper
5267865 December 7, 1993 Lee et al.
5270920 December 14, 1993 Pearse et al.
5276359 January 4, 1994 Chiang
5276885 January 4, 1994 Milnes et al.
5291394 March 1, 1994 Chapman
5291583 March 1, 1994 Bapat
5295256 March 15, 1994 Bapat
5297279 March 22, 1994 Bannon et al.
5301270 April 5, 1994 Steinberg et al.
5310349 May 10, 1994 Daniels et al.
5311422 May 10, 1994 Loftin et al.
5326270 July 5, 1994 Ostby et al.
5333254 July 26, 1994 Robertson
5339390 August 16, 1994 Robertson et al.
5374932 December 20, 1994 Wyschogrod et al.
5379366 January 3, 1995 Noyes
5379387 January 3, 1995 Carlstedt et al.
5381332 January 10, 1995 Wood
5386559 January 31, 1995 Eisenberg et al.
5395243 March 7, 1995 Lubin et al.
5412756 May 2, 1995 Bauman et al.
5421011 May 30, 1995 Camillone et al.
5421730 June 6, 1995 Lasker, III et al.
5446397 August 29, 1995 Yotsuyanagi
5446885 August 29, 1995 Moore et al.
5450480 September 12, 1995 Man et al.
5463682 October 31, 1995 Fisher et al.
5473732 December 5, 1995 Chang
5477170 December 19, 1995 Yotsuyanagi
5481647 January 2, 1996 Brody et al.
5499293 March 12, 1996 Behram et al.
5504879 April 2, 1996 Eisenberg et al.
5512849 April 30, 1996 Wong
5519618 May 21, 1996 Kastner et al.
5537590 July 16, 1996 Amado
5542024 July 30, 1996 Balint et al.
5542078 July 30, 1996 Martel et al.
5548506 August 20, 1996 Srinivasan
5561740 October 1, 1996 Barrett et al.
5579223 November 26, 1996 Raman
5579486 November 26, 1996 Oprescu et al.
5586311 December 17, 1996 Davies et al.
5596752 January 21, 1997 Knudsen et al.
5597312 January 28, 1997 Bloom et al.
5608789 March 4, 1997 Fisher et al.
5611076 March 11, 1997 Durflinger et al.
5627979 May 6, 1997 Chang et al.
5630127 May 13, 1997 Moore et al.
5649192 July 15, 1997 Stucky
5655118 August 5, 1997 Heindel et al.
5664206 September 2, 1997 Murow et al.
5675753 October 7, 1997 Hansen et al.
5678039 October 14, 1997 Hinks et al.
5689663 November 18, 1997 Williams
5715450 February 3, 1998 Ambrose et al.
5732192 March 24, 1998 Malin et al.
5754740 May 19, 1998 Fukuoka et al.
5761063 June 2, 1998 Jannette et al.
5761673 June 2, 1998 Bookman et al.
5765140 June 9, 1998 Knudson et al.
5768480 June 16, 1998 Crawford, Jr. et al.
5788504 August 4, 1998 Rice et al.
5795155 August 18, 1998 Morrel-Samuels
5809212 September 15, 1998 Shasha
5815415 September 29, 1998 Bentley et al.
5819257 October 6, 1998 Monge et al.
5822780 October 13, 1998 Schutzman
5825260 October 20, 1998 Ludwig et al.
5826077 October 20, 1998 Blakeley et al.
5826239 October 20, 1998 Du et al.
5826250 October 20, 1998 Trefler
5826252 October 20, 1998 Wolters, Jr. et al.
5829983 November 3, 1998 Koyama et al.
5831607 November 3, 1998 Brooks
5832483 November 3, 1998 Barker
5841435 November 24, 1998 Dauerer et al.
5841673 November 24, 1998 Kobayashi et al.
5864865 January 26, 1999 Lakis
5873096 February 16, 1999 Lim et al.
5875334 February 23, 1999 Chow et al.
5875441 February 23, 1999 Nakatsuyama et al.
5880614 March 9, 1999 Zinke et al.
5880742 March 9, 1999 Rao et al.
5886546 March 23, 1999 Hwang
5890146 March 30, 1999 Wavish et al.
5890166 March 30, 1999 Eisenberg et al.
5892512 April 6, 1999 Donnelly et al.
5907490 May 25, 1999 Oliver
5907837 May 25, 1999 Ferrel et al.
5910748 June 8, 1999 Reffay et al.
5911138 June 8, 1999 Li et al.
5918222 June 29, 1999 Fukui et al.
5920717 July 6, 1999 Noda
5930795 July 27, 1999 Chen et al.
5945852 August 31, 1999 Kosiec
5974441 October 26, 1999 Rogers et al.
5974443 October 26, 1999 Jeske
5978566 November 2, 1999 Plank et al.
5983267 November 9, 1999 Shklar et al.
5987415 November 16, 1999 Breese et al.
5990742 November 23, 1999 Suzuki
5995948 November 30, 1999 Whitford et al.
5995958 November 30, 1999 Xu
6008673 December 28, 1999 Glass et al.
6008808 December 28, 1999 Almeida et al.
6012098 January 4, 2000 Bayeh et al.
6020768 February 1, 2000 Lim
6023704 February 8, 2000 Gerard et al.
6023714 February 8, 2000 Hill et al.
6023717 February 8, 2000 Argyroudis
6028457 February 22, 2000 Tihanyi
6037890 March 14, 2000 Glass et al.
6044373 March 28, 2000 Gladney et al.
6044466 March 28, 2000 Anand et al.
6078982 June 20, 2000 Du et al.
6085188 July 4, 2000 Bachmann et al.
6085198 July 4, 2000 Skinner et al.
6091226 July 18, 2000 Amano
6092036 July 18, 2000 Hamann
6092083 July 18, 2000 Brodersen et al.
6094652 July 25, 2000 Faisal
6098172 August 1, 2000 Coss et al.
6105035 August 15, 2000 Monge et al.
6108004 August 22, 2000 Medl
6122632 September 19, 2000 Botts et al.
6125363 September 26, 2000 Buzzeo et al.
6130679 October 10, 2000 Chen et al.
6137797 October 24, 2000 Bass et al.
6144997 November 7, 2000 Lamming et al.
6151595 November 21, 2000 Pirolli et al.
6151624 November 21, 2000 Teare et al.
6154738 November 28, 2000 Call
6167441 December 26, 2000 Himmel
6177932 January 23, 2001 Galdes et al.
6185516 February 6, 2001 Hardin et al.
6185534 February 6, 2001 Breese et al.
6192371 February 20, 2001 Schultz
6194919 February 27, 2001 Park
6212502 April 3, 2001 Ball et al.
6216135 April 10, 2001 Brodersen et al.
6233332 May 15, 2001 Anderson et al.
6233617 May 15, 2001 Rothwein et al.
6240417 May 29, 2001 Eastwick et al.
6243713 June 5, 2001 Nelson et al.
6246320 June 12, 2001 Monroe
6275073 August 14, 2001 Tokuhiro
6275790 August 14, 2001 Yamamoto et al.
6281896 August 28, 2001 Alimpich et al.
6282547 August 28, 2001 Hirsch
6300947 October 9, 2001 Kanevsky
6304259 October 16, 2001 DeStefano
6308163 October 23, 2001 Du et al.
6313834 November 6, 2001 Lau et al.
6314415 November 6, 2001 Mukherjee
6324693 November 27, 2001 Brodersen et al.
6330554 December 11, 2001 Altschuler et al.
6338074 January 8, 2002 Poindexter et al.
6341277 January 22, 2002 Coden et al.
6341293 January 22, 2002 Hennessey
6344862 February 5, 2002 Williams et al.
6349238 February 19, 2002 Gabbita et al.
6351734 February 26, 2002 Lautzenheiser et al.
6356286 March 12, 2002 Lawrence
6359633 March 19, 2002 Balasubramaniam et al.
6366299 April 2, 2002 Lanning et al.
6369819 April 9, 2002 Pitkow et al.
6370537 April 9, 2002 Gilbert et al.
6380910 April 30, 2002 Moustakas et al.
6380947 April 30, 2002 Stead
6381738 April 30, 2002 Choi et al.
6389460 May 14, 2002 Stewart et al.
6389510 May 14, 2002 Chen et al.
6393605 May 21, 2002 Loomans
6396885 May 28, 2002 Ding et al.
6405211 June 11, 2002 Sokol et al.
6405251 June 11, 2002 Bullard et al.
6415259 July 2, 2002 Wolfinger et al.
6415283 July 2, 2002 Conklin
6418448 July 9, 2002 Sarkar
6421571 July 16, 2002 Spriggs et al.
6426723 July 30, 2002 Smith et al.
6429870 August 6, 2002 Chen et al.
6430571 August 6, 2002 Doan et al.
6430574 August 6, 2002 Stead
6437799 August 20, 2002 Shinomi et al.
6446065 September 3, 2002 Nishioka et al.
6446089 September 3, 2002 Brodersen et al.
6446200 September 3, 2002 Ball et al.
6446256 September 3, 2002 Hyman et al.
6448964 September 10, 2002 Isaacs et al.
6453038 September 17, 2002 McFarlane et al.
6463346 October 8, 2002 Flockhart et al.
6463440 October 8, 2002 Hind et al.
6469715 October 22, 2002 Carter et al.
6469716 October 22, 2002 Carter et al.
6473467 October 29, 2002 Wallace et al.
6473748 October 29, 2002 Archer
6493331 December 10, 2002 Walton et al.
6493399 December 10, 2002 Xia et al.
6493731 December 10, 2002 Jones et al.
6493754 December 10, 2002 Rosborough et al.
6496812 December 17, 2002 Campaigne et al.
6496833 December 17, 2002 Goldberg et al.
6502239 December 31, 2002 Zgarba et al.
6509898 January 21, 2003 Chi et al.
6513018 January 28, 2003 Culhane
6526440 February 25, 2003 Bharat
6526457 February 25, 2003 Birze
6529217 March 4, 2003 Maguire, III et al.
6529899 March 4, 2003 Kraft et al.
6529900 March 4, 2003 Patterson et al.
6530079 March 4, 2003 Choi et al.
6532474 March 11, 2003 Iwamoto et al.
6539374 March 25, 2003 Jung
6542912 April 1, 2003 Meltzer et al.
6546381 April 8, 2003 Subramanian et al.
6546406 April 8, 2003 DeRose et al.
6549904 April 15, 2003 Ortega et al.
6556226 April 29, 2003 Gould et al.
6556983 April 29, 2003 Altschuler et al.
6556985 April 29, 2003 Karch
6559864 May 6, 2003 Olin
6560592 May 6, 2003 Reid et al.
6560649 May 6, 2003 Mullen et al.
6567419 May 20, 2003 Yarlagadda
6571222 May 27, 2003 Matsumoto et al.
6577769 June 10, 2003 Kenyon et al.
6583800 June 24, 2003 Ridgley et al.
6584464 June 24, 2003 Warthen
6584569 June 24, 2003 Reshef et al.
6594662 July 15, 2003 Sieffert et al.
6597381 July 22, 2003 Eskridge et al.
6597775 July 22, 2003 Lawyer et al.
6598043 July 22, 2003 Baclawski
6606613 August 12, 2003 Altschuler et al.
6625657 September 23, 2003 Bullard
6629138 September 30, 2003 Lambert et al.
6636850 October 21, 2003 Lepien
6636901 October 21, 2003 Sudhakaran et al.
6643638 November 4, 2003 Xu
6643652 November 4, 2003 Helgeson et al.
6661889 December 9, 2003 Flockhart et al.
6661908 December 9, 2003 Suchard et al.
6678679 January 13, 2004 Bradford
6678773 January 13, 2004 Marietta et al.
6678882 January 13, 2004 Hurley et al.
6684261 January 27, 2004 Orton et al.
6690788 February 10, 2004 Bauer et al.
6691067 February 10, 2004 Ding et al.
6691230 February 10, 2004 Bardon
6701314 March 2, 2004 Conover et al.
6711565 March 23, 2004 Subramaniam et al.
6721747 April 13, 2004 Lipkin
6728702 April 27, 2004 Subramaniam et al.
6728852 April 27, 2004 Stoutamire
6732095 May 4, 2004 Warshavsky et al.
6732111 May 4, 2004 Brodersen et al.
6748422 June 8, 2004 Morin et al.
6750858 June 15, 2004 Rosenstein
6751663 June 15, 2004 Farrell et al.
6754475 June 22, 2004 Harrison et al.
6756994 June 29, 2004 Tlaskal
6763351 July 13, 2004 Subramaniam et al.
6771706 August 3, 2004 Ling et al.
6772148 August 3, 2004 Baclawski
6772350 August 3, 2004 Belani et al.
6778971 August 17, 2004 Altschuler et al.
6782091 August 24, 2004 Dunning, III
6785341 August 31, 2004 Walton et al.
6788114 September 7, 2004 Krenzke et al.
6792420 September 14, 2004 Chen et al.
RE38633 October 19, 2004 Srinivasan
6804330 October 12, 2004 Jones et al.
6807632 October 19, 2004 Carpentier et al.
6810429 October 26, 2004 Walsh et al.
6820082 November 16, 2004 Cook et al.
6829655 December 7, 2004 Huang et al.
6831668 December 14, 2004 Cras et al.
6839682 January 4, 2005 Blume et al.
6847982 January 25, 2005 Parker et al.
6851089 February 1, 2005 Erickson et al.
6856575 February 15, 2005 Jones
6856992 February 15, 2005 Britton et al.
6859787 February 22, 2005 Fisher et al.
6865546 March 8, 2005 Song
6865566 March 8, 2005 Serrano-Morales et al.
6865575 March 8, 2005 Smith et al.
6867789 March 15, 2005 Allen et al.
6918222 July 19, 2005 Lat et al.
6920615 July 19, 2005 Campbell et al.
6925457 August 2, 2005 Britton et al.
6925609 August 2, 2005 Lucke
6927728 August 9, 2005 Vook et al.
6934702 August 23, 2005 Faybishenko et al.
6940917 September 6, 2005 Menon et al.
6944644 September 13, 2005 Gideon
6954737 October 11, 2005 Kalantar et al.
6956845 October 18, 2005 Baker et al.
6959432 October 25, 2005 Crocker
6961725 November 1, 2005 Yuan et al.
6965889 November 15, 2005 Serrano-Morales et al.
6966033 November 15, 2005 Gasser et al.
6976144 December 13, 2005 Trefler et al.
6985912 January 10, 2006 Mullins et al.
7020869 March 28, 2006 Abrari et al.
7028225 April 11, 2006 Maso et al.
7031901 April 18, 2006 El Ata
7035808 April 25, 2006 Ford
7058367 June 6, 2006 Luo et al.
7058637 June 6, 2006 Britton et al.
7064766 June 20, 2006 Beda et al.
7073177 July 4, 2006 Foote et al.
7076558 July 11, 2006 Dunn
7089193 August 8, 2006 Newbold
7103173 September 5, 2006 Rodenbusch et al.
7124145 October 17, 2006 Surasinghe
7139999 November 21, 2006 Bowman-Amuah
7143116 November 28, 2006 Okitsu et al.
7171145 January 30, 2007 Takeuchi et al.
7171415 January 30, 2007 Kan et al.
7174514 February 6, 2007 Subramaniam et al.
7178109 February 13, 2007 Hewson et al.
7194380 March 20, 2007 Barrow et al.
7289793 October 30, 2007 Norwood et al.
RE39918 November 13, 2007 Slemmer
7302417 November 27, 2007 Iyer
7318020 January 8, 2008 Kim
7318066 January 8, 2008 Kaufman et al.
7334039 February 19, 2008 Majkut et al.
7343295 March 11, 2008 Pomerance
7353229 April 1, 2008 Vilcauskas, Jr. et al.
7398391 July 8, 2008 Carpentier et al.
7406475 July 29, 2008 Dorne et al.
7412388 August 12, 2008 Dalal et al.
7415731 August 19, 2008 Carpentier et al.
7505827 March 17, 2009 Boddy et al.
7526481 April 28, 2009 Cusson et al.
7536294 May 19, 2009 Stanz et al.
7555645 June 30, 2009 Vissapragada
7574494 August 11, 2009 Mayernick et al.
7596504 September 29, 2009 Hughes et al.
7640222 December 29, 2009 Trefler
7647417 January 12, 2010 Taneja
7665063 February 16, 2010 Hofmann et al.
7685013 March 23, 2010 Gendler
7689447 March 30, 2010 Aboujaoude et al.
7711919 May 4, 2010 Trefler et al.
7779395 August 17, 2010 Chotin et al.
7787609 August 31, 2010 Flockhart et al.
7818506 October 19, 2010 Shepstone et al.
7844594 November 30, 2010 Holt et al.
7870244 January 11, 2011 Chong et al.
7937690 May 3, 2011 Casey
7971180 June 28, 2011 Kreamer et al.
7983895 July 19, 2011 McEntee et al.
8001519 August 16, 2011 Conallen et al.
8037329 October 11, 2011 Leech et al.
8073802 December 6, 2011 Trefler
8250525 August 21, 2012 Khatutsky
8335704 December 18, 2012 Trefler et al.
8386960 February 26, 2013 Eismann et al.
8468492 June 18, 2013 Frenkel
8479157 July 2, 2013 Trefler et al.
8516193 August 20, 2013 Clinton et al.
8843435 September 23, 2014 Trefler et al.
8880487 November 4, 2014 Clinton et al.
8924335 December 30, 2014 Trefler et al.
8959480 February 17, 2015 Trefler et al.
9026733 May 5, 2015 Clinton et al.
20010013799 August 16, 2001 Wang
20010035777 November 1, 2001 Wang et al.
20010047355 November 29, 2001 Anwar
20010049682 December 6, 2001 Vincent et al.
20010052108 December 13, 2001 Bowman-Amuah
20010054064 December 20, 2001 Kannan
20020010855 January 24, 2002 Reshef et al.
20020013804 January 31, 2002 Gideon
20020029161 March 7, 2002 Brodersen et al.
20020042831 April 11, 2002 Capone et al.
20020049603 April 25, 2002 Mehra et al.
20020049715 April 25, 2002 Serrano-Morales et al.
20020049788 April 25, 2002 Lipkin et al.
20020054152 May 9, 2002 Palaniappan et al.
20020059566 May 16, 2002 Delcambre et al.
20020070972 June 13, 2002 Windl et al.
20020073337 June 13, 2002 Ioele et al.
20020091677 July 11, 2002 Sridhar
20020091678 July 11, 2002 Miller et al.
20020091710 July 11, 2002 Dunham et al.
20020091835 July 11, 2002 Lentini et al.
20020093537 July 18, 2002 Bocioned et al.
20020107684 August 8, 2002 Gao
20020118688 August 29, 2002 Jagannathan
20020120598 August 29, 2002 Shadmon et al.
20020120627 August 29, 2002 Mankoff
20020120762 August 29, 2002 Cheng et al.
20020133502 September 19, 2002 Rosenthal et al.
20020177232 November 28, 2002 Melker et al.
20020178232 November 28, 2002 Ferguson
20020181692 December 5, 2002 Flockhart et al.
20020184610 December 5, 2002 Chong et al.
20030001894 January 2, 2003 Boykin et al.
20030004934 January 2, 2003 Qian
20030004951 January 2, 2003 Chokshi
20030009239 January 9, 2003 Lombardo et al.
20030014399 January 16, 2003 Hansen et al.
20030037145 February 20, 2003 Fagan
20030050834 March 13, 2003 Caplan
20030050927 March 13, 2003 Hussam
20030050929 March 13, 2003 Bookman et al.
20030061209 March 27, 2003 Raboczi et al.
20030065544 April 3, 2003 Elzinga et al.
20030066031 April 3, 2003 Laane
20030074352 April 17, 2003 Raboczi et al.
20030074369 April 17, 2003 Scheutze et al.
20030084401 May 1, 2003 Abel et al.
20030109951 June 12, 2003 Hsiung et al.
20030115281 June 19, 2003 McHenry et al.
20030135358 July 17, 2003 Lissauer et al.
20030152212 August 14, 2003 Burok et al.
20030154380 August 14, 2003 Richmond et al.
20030191626 October 9, 2003 Al-Onaizan et al.
20030198337 October 23, 2003 Lenard
20030200254 October 23, 2003 Wei
20030200371 October 23, 2003 Abujbara
20030202617 October 30, 2003 Casper
20030222680 December 4, 2003 Jaussi
20030229529 December 11, 2003 Mui et al.
20030229544 December 11, 2003 Veres et al.
20040024603 February 5, 2004 Mahoney et al.
20040034651 February 19, 2004 Gupta et al.
20040049479 March 11, 2004 Dorne et al.
20040049509 March 11, 2004 Keller et al.
20040054610 March 18, 2004 Amstutz et al.
20040064552 April 1, 2004 Chong et al.
20040068517 April 8, 2004 Scott
20040088199 May 6, 2004 Childress et al.
20040103014 May 27, 2004 Teegan et al.
20040117759 June 17, 2004 Rippert et al.
20040122652 June 24, 2004 Andrews et al.
20040133416 July 8, 2004 Fukuoka et al.
20040133876 July 8, 2004 Sproule
20040139021 July 15, 2004 Reed et al.
20040145607 July 29, 2004 Alderson
20040147138 July 29, 2004 Vaartstra
20040162812 August 19, 2004 Lane et al.
20040162822 August 19, 2004 Papanyan et al.
20040167765 August 26, 2004 El Ata
20040205672 October 14, 2004 Bates et al.
20040220792 November 4, 2004 Gallanis et al.
20040236566 November 25, 2004 Simske
20040243587 December 2, 2004 Nuyens et al.
20040268221 December 30, 2004 Wang
20040268299 December 30, 2004 Lei et al.
20050027563 February 3, 2005 Fackler et al.
20050039191 February 17, 2005 Hewson et al.
20050044198 February 24, 2005 Okitsu et al.
20050050000 March 3, 2005 Kwok et al.
20050055330 March 10, 2005 Britton et al.
20050059566 March 17, 2005 Brown et al.
20050060372 March 17, 2005 DeBettencourt et al.
20050071211 March 31, 2005 Flockhart et al.
20050104628 May 19, 2005 Tanzawa et al.
20050125683 June 9, 2005 Matsuyama et al.
20050132048 June 16, 2005 Kogan et al.
20050138162 June 23, 2005 Byrnes
20050144023 June 30, 2005 Aboujaoude et al.
20050165823 July 28, 2005 Ondrusek et al.
20050198021 September 8, 2005 Wilcox et al.
20050216235 September 29, 2005 Butt et al.
20050228875 October 13, 2005 Monitzer et al.
20050234882 October 20, 2005 Bennett et al.
20050267770 December 1, 2005 Banavar et al.
20050288920 December 29, 2005 Green et al.
20060004845 January 5, 2006 Kristiansen et al.
20060015388 January 19, 2006 Flockhart et al.
20060020783 January 26, 2006 Fisher
20060041861 February 23, 2006 Trefler et al.
20060053125 March 9, 2006 Scott
20060063138 March 23, 2006 Loff et al.
20060064486 March 23, 2006 Baron et al.
20060064667 March 23, 2006 Freitas
20060075360 April 6, 2006 Bixler
20060080082 April 13, 2006 Ravindra et al.
20060080401 April 13, 2006 Gill et al.
20060092467 May 4, 2006 Dumitrescu et al.
20060100847 May 11, 2006 McEntee et al.
20060101386 May 11, 2006 Gerken et al.
20060101393 May 11, 2006 Gerken et al.
20060106846 May 18, 2006 Schulz et al.
20060139312 June 29, 2006 Sinclair et al.
20060149751 July 6, 2006 Jade et al.
20060167655 July 27, 2006 Barrow et al.
20060173724 August 3, 2006 Trefler et al.
20060173871 August 3, 2006 Taniguchi et al.
20060206303 September 14, 2006 Kohlmeier et al.
20060206305 September 14, 2006 Kimura et al.
20060218166 September 28, 2006 Myers et al.
20060271559 November 30, 2006 Stavrakos et al.
20060271920 November 30, 2006 Abouelsaadat
20060288348 December 21, 2006 Kawamoto et al.
20070005623 January 4, 2007 Self et al.
20070010991 January 11, 2007 Lei et al.
20070028225 February 1, 2007 Whittaker et al.
20070038765 February 15, 2007 Dunn
20070055938 March 8, 2007 Herring et al.
20070061789 March 15, 2007 Kaneko et al.
20070094199 April 26, 2007 Deshpande et al.
20070118497 May 24, 2007 Katoh
20070130130 June 7, 2007 Chan et al.
20070136068 June 14, 2007 Horvitz
20070143163 June 21, 2007 Weiss et al.
20070143851 June 21, 2007 Nicodemus et al.
20070203756 August 30, 2007 Sears et al.
20070208553 September 6, 2007 Hastings et al.
20070226031 September 27, 2007 Manson et al.
20070233902 October 4, 2007 Trefler et al.
20070239646 October 11, 2007 Trefler
20070260584 November 8, 2007 Marti et al.
20070294644 December 20, 2007 Yost
20080002823 January 3, 2008 Fama et al.
20080046462 February 21, 2008 Kaufman et al.
20080077384 March 27, 2008 Agapi et al.
20080085502 April 10, 2008 Allen et al.
20080109467 May 8, 2008 Brookins et al.
20080163253 July 3, 2008 Massmann et al.
20080184230 July 31, 2008 Leech et al.
20080189679 August 7, 2008 Rodriguez et al.
20080195377 August 14, 2008 Kato et al.
20080196003 August 14, 2008 Gerken et al.
20080208785 August 28, 2008 Trefler et al.
20080216055 September 4, 2008 Khatutsky
20080216060 September 4, 2008 Vargas
20080263510 October 23, 2008 Nerome et al.
20090007084 January 1, 2009 Conallen et al.
20090018998 January 15, 2009 Patten, Jr. et al.
20090075634 March 19, 2009 Sinclair et al.
20090083697 March 26, 2009 Zhang et al.
20090132232 May 21, 2009 Trefler
20090138844 May 28, 2009 Halberstadt et al.
20090158407 June 18, 2009 Nicodemus et al.
20090164494 June 25, 2009 Dodin
20090171938 July 2, 2009 Levin et al.
20090276206 November 5, 2009 Fitzpatrick et al.
20090282384 November 12, 2009 Keppler
20100011338 January 14, 2010 Lewis
20100088266 April 8, 2010 Trefler
20100107137 April 29, 2010 Trefler et al.
20100217737 August 26, 2010 Shama
20120041921 February 16, 2012 Canaday et al.
20130007267 January 3, 2013 Khatutsky
20130231970 September 5, 2013 Trefler et al.
20130254833 September 26, 2013 Nicodemus et al.
20140019400 January 16, 2014 Trefler et al.
20150089406 March 26, 2015 Trefler et al.
Foreign Patent Documents
19911098 December 1999 DE
0 549 208 June 1993 EP
0 669 717 August 1995 EP
0 996 916 May 2000 EP
1 015 997 July 2000 EP
1 019 807 July 2000 EP
1 073 955 February 2001 EP
1 073 992 February 2001 EP
1 135 723 September 2001 EP
1 163 604 December 2001 EP
1 183 636 March 2002 EP
1 196 882 April 2002 EP
1 203 310 May 2002 EP
1 208 482 May 2002 EP
1 212 668 June 2002 EP
1 240 592 September 2002 EP
1 277 102 January 2003 EP
1 277 119 January 2003 EP
1 277 120 January 2003 EP
1 277 153 January 2003 EP
1 277 155 January 2003 EP
1 277 329 January 2003 EP
1 374 083 January 2004 EP
1 382 030 January 2004 EP
1 386 241 February 2004 EP
1 393 172 March 2004 EP
1 393 188 March 2004 EP
1 402 336 March 2004 EP
1 407 384 April 2004 EP
1 430 396 June 2004 EP
1 438 649 July 2004 EP
1 438 654 July 2004 EP
1 438 672 July 2004 EP
1 483 685 December 2004 EP
1 490 747 December 2004 EP
1 490 809 December 2004 EP
1 492 232 December 2004 EP
1 782 183 May 2007 EP
1 830 312 September 2007 EP
1 840 803 October 2007 EP
2 115 581 November 2009 EP
98/38564 September 1998 WO
98/40807 September 1998 WO
99/05632 February 1999 WO
99/45465 September 1999 WO
99/50784 October 1999 WO
00/33187 June 2000 WO
00/33217 June 2000 WO
00/33226 June 2000 WO
00/33235 June 2000 WO
00/33238 June 2000 WO
00/52553 September 2000 WO
00/52603 September 2000 WO
00/67194 November 2000 WO
01/40958 June 2001 WO
01/75610 October 2001 WO
01/75614 October 2001 WO
01/75747 October 2001 WO
01/75748 October 2001 WO
01/76206 October 2001 WO
01/77787 October 2001 WO
01/79994 October 2001 WO
02/21254 March 2002 WO
02/44947 June 2002 WO
02/056249 July 2002 WO
02/080006 October 2002 WO
02/080015 October 2002 WO
02/082300 October 2002 WO
02/084925 October 2002 WO
02/088869 November 2002 WO
02/091346 November 2002 WO
02/101517 December 2002 WO
02/103576 December 2002 WO
03/021393 March 2003 WO
03/029923 April 2003 WO
03/029955 April 2003 WO
03/030005 April 2003 WO
03/030013 April 2003 WO
03/030014 April 2003 WO
03/058504 July 2003 WO
03/069500 August 2003 WO
03/071380 August 2003 WO
03/071388 August 2003 WO
03/073319 September 2003 WO
03/077139 September 2003 WO
03/085503 October 2003 WO
03/085580 October 2003 WO
2004/001613 December 2003 WO
2004/003684 January 2004 WO
2004/003766 January 2004 WO
2004/003885 January 2004 WO
2004/046882 June 2004 WO
2004/061815 July 2004 WO
2004/086197 October 2004 WO
2004/086198 October 2004 WO
2004/095207 November 2004 WO
2004/095208 November 2004 WO
2004/114147 December 2004 WO
2005/001627 January 2005 WO
2005/003888 January 2005 WO
2005/010645 February 2005 WO
2005/117549 December 2005 WO
2006/081536 August 2006 WO
2007/033922 March 2007 WO
2008/109441 September 2008 WO
2009/097384 August 2009 WO
Other references
  • [No Author Listed] About the Integrated Work Manager (IWM). Pegasystems, Inc., Apr. 30, 2009, 3 pages, <http://pdn-dev/DevNet/PRPCv5/KB/TMP9ad01zurnf.asp>.
  • [No Author Listed] How to Configure and Customize the Universal Worklist. SAP Netweaver '04 and SAP Enterprise Portal 6.0. SAP AG. Version 1, May 2004, 65 pages. <http://www.erpgenie.com/sap/netweaver/ep/Configuring%20the%20UWL.pdf>.
  • [No Author Listed] How to configure the IWM/IAC gateway. Pegasystems, Inc., Apr. 30, 2009, 4 pages, <http://pdn-dev/DevNet/PRPCv5/KB/TMP9cf8fzurq4.asp>.
  • [No Author Listed] How to install the Integrated Work Manager (IWM). Pegasystems, Inc., Apr. 30, 2009, 6 pages, <http://pdn-dev/DevNet/PRPCv5/KB/TMP9br1ezurp8.asp>.
  • [No Author Listed] HP Integrated Lights-Out 2, User Guide, Part No. 394326-004, HP, Aug. 2006, 189 pages.
  • [No Author Listed] IP Prior Art Database, Options when returning work items in workflow management systems. IBM, IPCOM00002798D, 2004, 3 pages.
  • [No Author Listed] IP Prior Art Database, Staff Queries and Assignments in Workflow Systems. IBM, IPCOM000142382D, 2006, 4 pages.
  • [No Author Listed] IP Prior Art Database, Using work items to manage user interactions with adaptive business services. IBM TDB, IPCOM000015953D, 2003, 4 pages.
  • [No Author Listed] Integrating with External Systems, PegaRULES Process Commander 5.2. Process Commander 5.2 reference. Pegasystems Inc, Cambridge, MA, 2006, 103 pages. <http://pdn.pega.com/ProductSupport/Products/PegaRULESProcessCommander/documents/PRPC/V5/502/iwes/PRPC52IntegratingwithExternalSystems.pdf>.
  • [No Author Listed] Localizing an Application, PegaRULES Process Commander. Process Commander 4.2 reference. Pegasystems Inc., Cambdrige, MA, 2006, 92 pages <http://pdn.pega.com/DevNet/PRPCv4/TechnologyPapers/documents/Localization0402.pdf>.
  • [No Author Listed] Oracle Universal Work Queue: Implementation Guide. Release 11i for Windows NT. Oracle Corporation. Jul. 2001, 136 pages. <http://docs.oracle.com/cd/A8596401/acrobat/ieu115ug.pdf>.
  • Bierbaum, A., et al., VR juggler: A virtual platform for virtual reality application development. Proceedings of the Virtual Reality 2001 Conference, IEEE, 2001, 8 pages, <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amumber-913774>.
  • Deelman, E., et al., Pegasus: A framework for mapping complex scientific workflows onto distributed systems, submitted to Scientific Programming, Jan. 2005. Pre-journal publication.
  • Deelman, E., et al., Pegasus: A framework for mapping complex scientific workflows onto distributed systems. Scientific Programming, 13, pp. 219-237, 2005.
  • Fayad, M.E., et al., Object-oriented application frameworks. Communications of the ACM, Oct. 1997, vol. 40, issue 10, pp. 32-38, <http://dl.acm.org/citation.cfm?id=262798>.
  • Hague, Darren, Universal Worklist with SAP Netweaver Portal. Galileo Press, 2008, pp. 11-31. <http://www.sap-hefte.de/download/dateien/1461/146leseprobe.pdf>.
  • International Search Report and Written Opinion for Application No. PCT/GB2004/000677, mailed Aug. 2, 2004 (15 pages).
  • International Search Report for Application No. PCT/US2004/020783, mailed Nov. 8, 2005 (2 pages).
  • International Preliminary Report on Patentability for Application No. PCT/US2004/020783, issued Feb. 13, 2006 (6 pages).
  • LaRue, J., Leveraging Integration and Workflow. Integrated Solutions, Accounting Today, SourceMedia, Aug. 2006, pp. 18-19.
  • Mandal, et al., Integrating existing scientific workflow systems: The kepler/pegasus example. USC Information Sciences Institute, 2007, 8 pages.
  • Markiewicz, M.E., et al., Object oriented framework development. ACM, 2001, 13 pages, <http://dl.acm.org/citation.cfm?id=372771>.
  • Marmel, Elaine, Microsoft Office Project 2007 Bible, ISBN 0470009926, Wiley Publishing, Inc., 2007, 961 pages.
  • Pientka, B., et al., Programming with proofs and explicit contexts. International Symposium on Principles and Practice of Declarative Programming, ACM, 2008, pp. 163-173, <http://delivery.acm.org/10.1145/1390000/1389469/p163-pientka.pdf?>.
  • Richner, T., et al., Recovering high-level views of object-oriented applications from static and dynamic information. IEEE, 1999, 10 pages, <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=792487>.
  • Singh, G., et al., Workflow task clustering for best effort systems with pegasus, Pegasus, 2008, 8 pages.
  • Srinivasan, V., et al., Object persistence in object-oriented applications. IBM Systems Journal, 1997, vol. 36, issue 1, pp. 66-87, <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber-5387186>.
  • Breiman, L., et al., Bagging predictors, Machine Learning, vol. 24, No. 2, Aug. 31, 1996, pp. 123-140, Kluwer Academic Publishers, Netherlands.
  • Mitchell, T.M., Machine Learning, Chapter 3, 1997, McGraw-Hill, pp. 52-80.
  • Mitchell, T.M., Machine Learning, Chapter 6, 1997, McGraw-Hill, pp. 154-200.
  • [No Author Listed] FreeBSD Project. “EDQUOTA(8)” in Free BSD System Manager's Manual. FreeBSD 8.2 Jun. 6, 1993. pp. 1-2. Retrieved from freebsd.org on Oct. 27, 2011.
  • [No Author Listed] “How SmartForms for Fair Blaze Advisor works”, Fair Issac White Paper, http://www.FAIRISAAC.COM/, Oct. 31, 2005 (website no longer active).
  • [No Author Listed] Solaris 9 resource manager software. A technical white paper. Sun Microsystems, Inc., Palo Alto CA, 2002, 37 pages. XP-002291080. Retrieved Aug. 3, 2004 from <http://wwws.sun.com/software/whitepapers/solaris9/srm.pdf>.
  • Bertino and P. Foscoli, “Index Organizations for Object-Oriented Database Systems,” IEEE Trans. on Knowledge and Data Engineering, 7(2)193-209 (1995).
  • Brusilovsky, P., and De Bra, P., Editors, “Second Workshop on Adaptive Hypertext and Hypermedia Proceedings,” Jun. 20-24, 1998. Ninth ACM Conference on Hypertext and Hypermedia, Hypertext'98. pp. 1-2.
  • Burleson, “Adding behaviors to relational databases,” DBMS, 8(10): 68(5) (1995).
  • Busse, Ralph et al., “Declarative and Procedural Object Oriented Views”, 1998, IEEE retrieved Mar. 22, 2007.
  • Buyya et al., “Economic Models for Resource Management and Scheduling in Grid Computing,” 2002. Concurrency and Computation: Practice and Experience. vol. 14. pp. 1507-1542.
  • Chan and W. Hwang, “Towards Integrating Logic, Object, Frame, and Production,” Proc. Fourth Int'l. Conf. on Software Engineering and Knowledge Engineering, pp. 463-469, Jun. 1992.
  • Cheng and Smith, “Applying Constraint Satisfaction Techniques to Job Shop Scheduling,” 1997. Annals of Operations Research. 70: 327-357 (1997).
  • Cheng, Cheng-Chung; Smith, Stephen F.; “A Constraint Satisfaction Approach to Makespan Scheduling,” AIPS 1996 Proceedings, pp. 45-52 (1996).
  • Cochrane, Roberta et al., “Integrating Triggers and Declarative Constraints in SQL”, p. 567-578, Proceedings of the 22nd VLDB Conference Mumbai (Bombay), India, 1996, retrieved Mar. 22, 2007.
  • Damerau, F.J., Problems and some solutions in customization of natural language database front ends. ACM Transactions on Information Systems, vol. 3, No. 2, Apr. 1, 1985, pp. 165-184.
  • Danforth, “Integrating Object and Relational Technologies,” Proc. Sixteenth Annual Int'l. Computer Software and Applications Conf., pp. 225-226, Sep. 1992 (abstract).
  • DeMichiel, et al., “Polyglot Extensions to Relational Databases for Sharable Types and Functions in a Multi Language Environment,” Proc. Ninth Int'l. Conf. on Data Engineering, pp. 651-660, Apr. 1993.
  • Devarakonda et al., Predictability of process resource usage: A measurement-based study on UNIX. IEEE Transactions on Software Engineering. 1989;15(12):1579-1586.
  • Communication for European Patent Application No. 05755530.2, dated Sep. 6, 2007.
  • European Search Report for Application No. 05755530.2, dated Mar. 26, 2012 (3 Pages).
  • European Office Action issued Aug. 31, 2012 for Application No. 05755530.2 (4 Pages).
  • Communication for European Patent Application No. 07250844.3 enclosing European Search Report, dated Jul. 11, 2007.
  • Communication for European Patent Application No. 07250844.3, dated Mar. 28, 2008.
  • European Office Action issued Jul. 9, 2012 for Application No. 07250844.3 (8 Pages).
  • Communication for European Patent Application No. 07250848.4, dated Aug. 13, 2007 (EESR enclosed).
  • Communication for European Patent Application No. 07250848.4, dated May 29, 2008.
  • Communication for European Patent Application No. 08731127.0, dated Oct. 13, 2009.
  • Extended European Search Report issued Oct. 29, 2012 for Application No. 08731127.0 (8 Pages).
  • Francisco, S. et al. “Rule-Based Web Page Generation” Proceedings of the 2nd Workshop on Adaptive Hypertext and Hypermedia, Hypertext'98, Jun. 20-24, 1998.
  • Gajos et al. SUPPLE: Automatically Generating User Interfaces. IUI 2004, 8 pages.
  • International Search Report for PCT/US2005/018599, dated May 15, 2007.
  • International Preliminary Report on Patentability for PCT/US2005/018599, dated Jun. 5, 2007.
  • International Search Report & Written Opinion for PCT/US06/03160, mailed Jul. 21, 2008.
  • International Preliminary Report on Patentability for PCT/US06/03160, dated Apr. 9, 2009.
  • International Search Report for PCT/US08/55503, mailed Jul. 28, 2008.
  • International Preliminary Report on Patentability for PCT/US2008/055503, mailed Sep. 17, 2009.
  • International Search Report & Written Opinion for PCT/US09/32341, mailed Mar. 11, 2009.
  • International Preliminary Report on Patentability for PCT/US2009/032341, mailed Aug. 12, 2010.
  • Johnson et al., Sharing and reusing rules: a feature comparison of five expert system shells. IEEE Expert, IEEE Services Center, New York, NY, vol. 9, No. 3, Jun. 1, 1994, pp. 3-17.
  • Jones et al., A user-centered approach to functions in Excel. International Conference on Functional Programming, Uppsala, Jun. 30, 2003, pp. 1-12.
  • Kim, “Object-Oriented Databases: Definition and Research Directions,” IEEE Trans. on Knowledge and Data Engineering, vol. 2(3) pp. 327-341, Sep. 1990.
  • Kuhn, H.W. “The Hungarian Method for the Assignment Problem,” Naval Research Logistics Quarterly, 2 (1955), pp. 83-97.
  • Kuno and E.A. Rundensteiner, “Augmented Inherited Multi-Index Structure for Maintenance of Materialized Path Query Views,” Proc. Sixth Int'l. Workshop on Research Issues in Data Engineering, pp. 128-137, Feb. 1996.
  • Lippert, Eric, “Fabulous Adventures in Coding: Metaprogramming, Toast and the Future of Development Tools,” Microsoft.com Blog, MSDN Home, published Mar. 4, 2004, 6 pgs.
  • Manghi, Paolo et al., “Hybrid Applications Over XML: Integrating the Procedural and Declarative Approaches,” 2002 ACM, pp. 1-6. Retrieved Mar. 22, 2007.
  • Markowitz and A. Shoshani, “Object Queries over Relational Databases: Language, Implementation, and Applications,” IEEE Xplore, pp. 71-80, Apr. 1993.
  • Maryanski, et al., “The Data Model Compiler: A Tool for Generating Object-Oriented Database Systems,” 1986 Int'l. Workshop on Object-Oriented Database Systems, 73-84 (1986).
  • McConnell, Steven C., “Brooks' Law Repealed,” IEEE Software, pp. 6-9, Nov./Dec. 1999.
  • Mecca, G. et al. “Cut and Paste”, ACM, pp. 1-25 and Appendix I-IV (1999). Retrieved Mar. 22, 2007.
  • Morizet-Mahoudeaux, “A Hierarchy of Network-Based Knowledge Systems,” IEEE Trans. on Systems, Man, and Cybernetics, vol. 21(5), pp. 1184-1191, Sep. 1991.
  • Reinertsen, Don, “Is It Always a Bad Idea to Add Resources to a Late Project?,” Oct. 30, 2000. Electronic Design. vol. 48, Issue 22, p. 70.
  • Riccuiti, M., Oracle 8.0 on the way with objects: upgrade will also build in multidimensional engine. InfoWorld. Sep. 25, 1995;17(39):16.
  • Salvini and M.H. Williams, “Knowledge Management for Expert Systems,” IEE Colloquium on ‘Knowledge Engineering’, 3 pages, May 1990.
  • Schiefelbein, Mark, “A Backbase Ajax Front-end for J2EE Applications,” Internet Article, <http://dev2dev.bea.com/1pt/a/433>, Aug. 29, 2005.
  • Sellis, et al., “Coupling Production Systems and Database Systems: A Homogeneous Approach,” IEEE Trans. on Knowledge and Data Engineering, vol. 5(2), pp. 240-256, Apr. 1993.
  • Shyy and S.Y.W. Su, “Refinement Preservation for Rule Selection in Active Object-Oriented Database Systems,” Proc. Fourth Int'l Workshop on Research Issues in Data Engineering, pp. 115-123, Feb. 1994.
  • Smedley, T.J. et al., “Expanding the Utility of Spreadsheets Through the Integration of Visual Programming and User Interface Objects,” School of Computer Science, Technical University of Nova Scotia, ACM, 1996; pp. 148-155.
  • Stonebraker, “The Integration of Rule Systems and Database Systems,” IEEE Trans. on Knowledge and Data Engineering, vol. 4(5), pp. 415-423, Oct. 1992.
  • Sun, et al., “Supporting Inheritance in Relational Database Systems,” IEEE, pp. 511-518, Jun. 1992.
  • Thuraisingham, “From Rules to Frames and Frames to Rules,” AI Expert, pp. 31-39, Oct. 1989.
  • Vranes, S. “Integrating Multiple Paradigms within the Blackboard Framework,” IEEE Transactions on Software Engineering, vol. 21, No. 3, Mar. 1995, pp. 244-262.
  • Yang, Bibo; Geunes, Joseph; O'Brien, William J.; “Resource-Constrained Project Scheduling: Past Work and New Directions,” Apr. 2001.
Patent History
Patent number: 9270743
Type: Grant
Filed: Oct 29, 2014
Date of Patent: Feb 23, 2016
Patent Publication Number: 20150127736
Assignee: Pegasystems Inc. (Cambridge, MA)
Inventor: Benjamin A. Frenkel (Cambridge, MA)
Primary Examiner: Rehana Perveen
Assistant Examiner: Alexander Khong
Application Number: 14/527,348
Classifications
Current U.S. Class: Firewall (726/11)
International Classification: G06F 17/30 (20060101); H04L 29/08 (20060101);