Processing model-based commands for distributed applications

- Microsoft

The present invention extends to methods, systems, and computer program products for processing model-based commands for distributed applications. Embodiments facilitate execution of model-based commands, including software lifecycle commands, using model-based workflow instances. Data related to command execution is stored in a shared repository such that command processors can understand their status in relationship to workflow instances. Further, since the repository is shared, command execution can be distributed and balanced across a plurality of different executive services. Embodiments also include model-based error handling and error recovery mechanisms.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

N/A.

BACKGROUND

Background and Relevant Art

Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another and to other electronic devices to form both wired and wireless computer networks over which the computer systems and other electronic devices can transfer electronic data. Accordingly, the performance of many computing tasks is distributed across a number of different computer systems and/or a number of different computing components.

As computerized systems have increased in popularity, so has the complexity of the software and hardware employed within such systems. In general, the need for seemingly more complex software continues to grow, which in turn tends to be one of the forces driving greater development of hardware. For example, if application programs require too much of a given hardware system, the hardware system can operate inefficiently, or otherwise be unable to process the application program at all. Recent trends in application program development, however, have removed many of these types of hardware constraints, at least in part through the use of distributed application programs.

In general, distributed application programs comprise components that are executed over several different hardware components. Distributed application programs are often large, complex, and diverse in their implementations. Further, distributed applications can be multi-tiered and have many (differently configured) distributed components and subsystems, some of which are long-running workflows and legacy or external systems (e.g., SAP). One can appreciate that, while this ability to combine processing power through several different computer systems can be an advantage, there are various complexities associated with distributing application program modules.

For example, the very distributed nature of business applications and the variety of their implementations create a challenge to consistently and efficiently manage their lifecycles. The challenge is due at least in part to the diversity of implementation technologies composed into a distributed application program. That is, diverse parts of a distributed application program have to behave coherently and reliably. Typically, different parts of a distributed application program are individually and manually made to work together. For example, a user or system administrator creates text documents describing commands that indicate, for example, how and when to verify, clean, check, fix, deploy, start, stop, undeploy, etc., parts of an application and what to do when failures occur. Accordingly, it is then commonly a manual task to act upon the commands described in these text documents.

BRIEF SUMMARY

The present invention extends to methods, systems, and computer program products for processing model-based commands for distributed applications. A command request for a distributed application is received. The command request includes a command reference to a command definition model defining a corresponding command and includes a reference to an application model. The command request indicates that the corresponding command is to be implemented for the application model.

A command record for the received command request is created in a repository. The command record stores information related to implementing the command request. A workflow model is identified from the command definition model. The workflow model describes how to implement the received command request.

The workflow model is accessed from the repository and an instance of the workflow is created from the workflow model. The workflow instance has a command ID and includes a set of pre-defined activities configured to interoperate to implement the command request. The command ID is stored in the command record. The application reference is submitted to the workflow instance to initiate the workflow instance. Information related to the behavior of the workflow instance is recorded within the command record as the workflow instance implements the command request for the application model.

In some embodiments, application models are locked during command implementation and then released. For example, an application model is locked to prevent another command from executing on the same model while the workflow instance implements the command request for the application model. Subsequently, the workflow instance completes or a request to stop the workflow instance is received prior to the workflow instance completing implementation of the command request. A stop request is issued to the workflow instance. It is determined that the workflow instance is stopped. The lock on the application model is released such that other commands can be implemented for the application model.

In other embodiments, multiple services interoperate to distribute implementation of a command. A first executive service receives a command request for a distributed application. The first executive service determines that it is already running a plurality of other model-based commands. The first executive service queries the repository to discover other executive services. The first executive service receives an indication that a second executive service can be communicated with to load balance processing of model-based commands. The first executive service passes the command request to the second executive service in response to the indication. Accordingly, the load of command processing can be balanced across a plurality of executive services.

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIGS. 1A and 1B illustrate an example computer architecture that facilitates processing model-based commands for distributed applications.

FIG. 2 illustrates example relationships between various models that can interoperate to process model-based commands.

FIG. 3 illustrates a flow chart of an example method for processing a model-based command.

FIG. 4A illustrates an example data flow for a command protocol for processing a model-based command.

FIG. 4B illustrates an example data flow for a command protocol to stop processing a model-based command.

FIG. 5 illustrates an example computer architecture that facilitates distributing the implementation of a model-based command.

FIG. 6 illustrates a flow chart of an example method for distributing the implementation of a model-based command.

DETAILED DESCRIPTION

The present invention extends to methods, systems, and computer program products for processing model-based commands for distributed applications. A command request for a distributed application is received. The command request includes a command reference to a command definition model defining a corresponding command and includes a reference to an application model. The command request indicates that the corresponding command is to be implemented for the application model.

A command record for the received command request is created in a repository. The command record stores information related to implementing the command request. A workflow model is identified from the command definition model. The workflow model describes how to implement the received command request.

The workflow model is accessed from the repository and an instance of the workflow is created from the workflow model. The workflow instance has a command ID and includes a set of pre-defined activities configured to interoperate to implement the command request. The command ID is stored in the command record. The application reference is submitted to the workflow instance to initiate the workflow instance. Information related to the behavior of the workflow instance is recorded within the command record as the workflow instance implements the command request for the application model.

In some embodiments, application models are locked during command implementation and then released. For example, an application model is locked to prevent another command from executing on the same model while the workflow instance implements the command request for the application model. Subsequently, the workflow instance completes or a request to stop the workflow instance is received prior to the workflow instance completing implementation of the command request. A stop request is issued to the workflow instance. It is determined that the workflow instance is stopped. The lock on the application model is released such that other commands can be implemented for the application model.

In other embodiments, multiple services interoperate to distribute implementation of a command. A first executive service receives a command request for a distributed application. The first executive service determines that it is already running a plurality of other model-based commands. The first executive service queries the repository to discover other executive services. The first executive service receives an indication that a second executive service can be communicated with to load balance processing of model-based commands. The first executive service passes the command request to the second executive service in response to the indication. Accordingly, the load of command processing can be balanced across a plurality of executive services.

Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: physical storage media and transmission media.

Physical storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.

A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.

Further, it should be understood, that upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to physical storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile physical storage media at a computer system. Thus, it should be understood that physical storage media can be included in computer system components that also (or even primarily) utilize transmission media.

Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.

Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.

FIGS. 1A and 1B illustrate an example computer architecture 100 that facilitates processing model-based commands for distributed applications. Referring initially to FIG. 1A, tools 125, repository 120, executive service 115, driver services 140, and host environments 135 are depicted in computer architecture 100. Each of the depicted components can be connected to one another over a network, such as, for example, a Local Area Network (“LAN”), a Wide Area Network (“WAN”), and even the Internet. Accordingly, each of the depicted components as well as any other connected components, can create message related data and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), etc.) over the network.

As depicted, tools 125 can be used to write, modify, and store application models 151, such as, for example, declarative application model 153, in repository 120. Declarative models are used to describe the structure and behavior of real-world applications. Thus, a user (e.g., distributed application program developer) can use one or more of tools 125 to create declarative application model 153.

Generally, declarative models include one or more sets of high-level instructions expressing application intent for a distributed application. Thus, the high-level instructions generally describe operations and/or behaviors of one or more modules in the distributed application program. However, the high-level instructions do not necessarily describe implementation steps required to deploy a distributed application having the particular operations/behaviors (although they can if appropriate). For example, declarative application model 153 can express the generalized intent of a workflow, including, for example, that a first Web service be connected to a database. However, declarative application model 153 does not necessarily describe how (e.g., protocol), when, where (e.g., URI endpoint), etc., the Web service and database are to be connected to one another.
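
By way of illustration only, the following sketch shows one way such declarative intent might be captured as data. The schema (field names such as “modules” and “connections”) is an assumption for purposes of illustration, not the model format described herein; the connection expresses only that the Web service and database are to be connected, without specifying protocol or endpoint.

```python
# Hypothetical representation of a declarative application model.
# Only intent is expressed; protocol, timing, and endpoint details are
# deliberately absent and are filled in later during refinement.
declarative_application_model = {
    "name": "OrderProcessing",
    "modules": [
        {"id": "order_service", "kind": "WebService"},
        {"id": "orders_db", "kind": "Database"},
    ],
    "connections": [
        # Intent: the Web service is connected to the database.
        {"from": "order_service", "to": "orders_db"},
    ],
}
```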

Generally, to implement a command for an application based on a declarative model, the command and a reference to the declarative model can be sent to executive service 115. Executive service 115 can refine the declarative model until there are no ambiguities and the details are sufficient for drivers (e.g., included in driver services 140) to consume. Thus, executive service 115 can receive and refine declarative application model 153 so that declarative application model 153 can be translated by drivers (e.g., one or more technology-specific drivers) included in driver services 140.

Tools 125 and executive service 115 can exchange commands for model-based applications and corresponding results using command protocol 181. For example, tools 125 can send command 129 to executive service 115 to perform a command for a model-based application. Executive service 115 can report result 137 back to tools 125 to indicate the results and/or progress of command 129. Generally, a command represents an operation to be performed on a model. Operations include creating, verifying, re-verifying, cleaning, deploying, undeploying, checking, fixing, updating, monitoring, starting and stopping distributed applications based on corresponding declarative models.

In general, “refining” a declarative model can include some type of work breakdown structure, such as, for example, progressive elaboration, so that the declarative model instructions are sufficiently complete for translation by driver services 140. Since declarative models can be written relatively loosely by a human user (i.e., containing generalized intent instructions or requests), there may be different degrees or extents to which executive service 115 modifies or supplements a declarative model for implementing a command for an application. Work breakdown module 116 can implement a work breakdown structure algorithm, such as, for example, a progressive elaboration algorithm, to determine when an appropriate granularity has been reached and instructions are sufficient for drivers.

Executive service 115 can also account for dependencies and constraints included in a declarative model. For example, executive service 115 can be configured to refine declarative application model 153 based on semantics of dependencies between elements in the declarative application model 153 (e.g., one web service connected to another). Thus, executive service 115 and work breakdown module 116 can interoperate to output detailed application model 153D that provides driver services 140 with sufficient information to realize distributed application 107.

In additional or alternative implementations, executive service 115 can also be configured to refine declarative application model 153 for some other contextual awareness. For example, executive service 115 can refine declarative application model 153 based on information about the inventory of host environments 135 that may be available in the datacenter where distributed application 107 is to be deployed. Executive service 115 can reflect contextual awareness information in detailed application model 153D.

In addition, executive service 115 can be configured to fill in missing data regarding computer system assignments. For example, executive service 115 can identify a number of different distributed application program modules in declarative application model 153 that have no requirement for specific computer system addresses or operating requirements. Thus, executive service 115 can assign distributed application program modules to an available host environment on a computer system. Executive service 115 can reason about the best way to fill in data in a refined declarative application model 153. For example, as previously described, executive service 115 may determine and decide which transport to use for an endpoint based on proximity of connection, or determine and decide how to allocate distributed application program modules based on factors appropriate for handling expected spikes in demand. Executive service 115 can then record missing data in detailed application model 153D (or segment thereof).

In additional or alternative implementations, executive service 115 can be configured to compute dependent data in declarative application model 153. For example, executive service 115 can compute dependent data based on an assignment of distributed application program modules to application containers on computer systems. Thus, executive service 115 can calculate URI addresses for the endpoints and propagate the corresponding URI addresses from provider endpoints to consumer endpoints. In addition, executive service 115 may evaluate constraints in declarative application model 153. For example, executive service 115 can be configured to check whether two distributed application program modules can actually be assigned to the same machine, and if not, executive service 115 can refine detailed application model 153D to accommodate this requirement.
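
A minimal sketch of such a refinement pass is shown below, assuming the illustrative model layout introduced earlier. The function and field names (e.g., “host”, “anti_collocation”) are assumptions; the sketch simply shows assigning hosts, computing dependent endpoint URIs, propagating them from provider to consumer, and evaluating a collocation constraint.

```python
def refine_model(model, host_inventory):
    """Illustrative refinement pass over the assumed model layout."""
    detailed = dict(model)  # the refined, "detailed" model (analogous to 153D)
    # Fill in missing computer system assignments for modules that state no
    # specific host requirement.
    for module, host in zip(detailed["modules"], host_inventory):
        module.setdefault("host", host)
    # Compute dependent data: a provider URI per module, propagated to the
    # consumer side of each connection.
    uris = {m["id"]: f"http://{m['host']}/{m['id']}" for m in detailed["modules"]}
    for connection in detailed["connections"]:
        connection["provider_uri"] = uris[connection["to"]]
    # Evaluate a constraint: modules declared as anti-collocated must not be
    # assigned to the same machine.
    hosts = {m["id"]: m["host"] for m in detailed["modules"]}
    for a, b in detailed.get("anti_collocation", []):
        if hosts[a] == hosts[b]:
            raise ValueError(f"{a} and {b} cannot share a machine")
    return detailed
```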

Accordingly, after adding appropriate data to declarative application model 153 (or otherwise modifying/refining it) to create detailed application model 153D, executive service 115 can finalize the refined detailed application model 153D so that it can be translated by technology-specific drivers in driver services 140. To finalize or complete the detailed application model 153D, executive service 115 can, for example, partition a declarative application model into segments (e.g., application modules) so that target drivers can request detailed information about individual segments. Thus, executive service 115 can tag each declarative application model (or segment thereof) with its target driver (e.g., the address of a technology-specific driver).

Furthermore, executive service 115 can verify that a detailed application model (e.g., 153D) can actually be translated by one or more technology-specific drivers, and, if so, pass the detailed application model (or segment thereof) to a particular technology-specific driver for translation. For example, executive service 115 can be configured to tag portions of detailed application model 153D with labels indicating an intended implementation for portions of detailed application model 153D. An intended implementation can indicate a framework and/or a host, such as, for example, WCF-IIS, Aspx-IIS, SQL, Axis-Tomcat, WF/WCF-WAS, etc.
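
The following sketch illustrates, under the same assumed model layout, how a detailed model might be partitioned into segments and each segment tagged with the address of a technology-specific driver. The “implementation” field and the shape of the driver registry are assumptions for illustration.

```python
def partition_and_tag(detailed_model, driver_registry):
    """Illustrative partitioning of a detailed model into driver-tagged segments."""
    segments = []
    for module in detailed_model["modules"]:
        # An intended implementation label, e.g., "WCF-IIS", "Aspx-IIS", "SQL".
        intended = module.get("implementation", "WCF-IIS")
        segments.append({
            "segment": module,
            "target_driver": driver_registry[intended],  # driver address
        })
    return segments
```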

After refining a model, executive service 115 can store the refined model back in repository 120 for later use. Thus, executive service 115 can forward detailed application model 153D to driver services 140 or store detailed application model 153D in repository 120. When detailed application model 153D is stored in repository 120, it can be subsequently provided to driver services 140 without further refinements.

Executive service 115 and driver services 140 can perform requested commands for model-based applications using commands and models protocol 182. For example, executive service 115 can send detailed application model 153D and command 129 to driver services 140. Driver services 140 can report result 136 back to executive service 115 to indicate the results and/or progress of command 129.

Upon receiving detailed application model 153D and command 129, driver services 140 can take actions (e.g., actions 133) to implement an operation for a distributed application (e.g., distributed application 107, including application parts 107A and 107B) based on detailed application model 153D. Driver services 140 interoperate with one or more (e.g., technology-specific) drivers and translators to translate detailed application model 153D (or declarative application model 153) into one or more (e.g., technology-specific) actions 133. Actions 133 can be used to realize command 129 for a model-based application.

Thus, distributed application 107 can be implemented in host environments 135. Each application part, for example, 107A, 107B, etc., can be implemented in a separate host environment and connected to other application parts via corresponding endpoints.

Accordingly, the generalized intent of declarative application model 153, as refined by executive service 115 and implemented by drivers accessible to driver services 140, is expressed in one or more of host environments 135. For example, when the general intent of declarative application model 153 is to connect two Web services, specifics of connecting the first and second Web services can vary depending on the platform and/or operating environment. For example, when deployed within the same data center, Web service endpoints can be configured to connect using TCP. On the other hand, when the first and second Web services are on opposite sides of a firewall, the Web service endpoints can be configured to connect using a relay connection.

Distributed application programs can provide operational information about execution. For example, during execution, a distributed application can emit events 134 indicative of events (e.g., execution or performance issues) that have occurred at the distributed application. In one implementation, driver services 140 collect emitted events and send out an event stream to a monitoring service on a continuous, ongoing basis, while, in other implementations, an event stream is sent out on a scheduled basis (e.g., based on a schedule set up by a corresponding technology-specific driver). The monitoring service can perform analysis, tuning, and/or other appropriate model modifications.

FIG. 1B depicts an example expanded view of executive service 115 and repository 120. Executive service 115 includes command processor 141 and workflow runtime 142. Command processor 141 and workflow runtime 142 are configured to interoperate to create workflows (e.g., workflow 147) for processing received commands.

Generally, workflows are composed of a set of activities, such as, for example, activities provided by a command processor activity library or by the Windows® Workflow Foundation Service (“WF”). A command processor activity library consists of a set of well-defined activities. Use of a library facilitates simple and efficient workflow authoring constrained to safe execution by an executive service. Activities can be categorized into at least two groups: command-specific and model-specific. Command-specific activities include driver activities that interact with driver services 140 to issue driver commands, such as, for example, Verify and Deploy. Model-specific activities include state-related activities, such as, for example, SetState and GetState, and execution-related activities, such as, for example, GenerateExecutionPlan. State-related activities relate to retrieval and update of a subject state in the repository. GenerateExecutionPlan analyzes dependencies declared in a model and creates an action plan for model parts to be executed in an orderly manner.
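
A minimal sketch of how such an activity library might be categorized is shown below. The class names and the particular activity set are assumptions; the sketch only mirrors the command-specific versus model-specific grouping described above.

```python
from dataclasses import dataclass
from enum import Enum

class ActivityKind(Enum):
    COMMAND_SPECIFIC = "command-specific"  # e.g., driver activities such as Verify, Deploy
    MODEL_SPECIFIC = "model-specific"      # e.g., SetState, GetState, GenerateExecutionPlan

@dataclass
class Activity:
    name: str
    kind: ActivityKind

# Hypothetical pre-defined activity library used to author workflows.
ACTIVITY_LIBRARY = [
    Activity("Verify", ActivityKind.COMMAND_SPECIFIC),
    Activity("Deploy", ActivityKind.COMMAND_SPECIFIC),
    Activity("GetState", ActivityKind.MODEL_SPECIFIC),
    Activity("SetState", ActivityKind.MODEL_SPECIFIC),
    Activity("GenerateExecutionPlan", ActivityKind.MODEL_SPECIFIC),
]
```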

In some environments, various default workflows, such as, for example, Verify, Deploy, Start/Stop, and Fix, can be provided. A particular workflow is identified as the implementation of a certain command. The relationship between command and workflow can be 1:1 by default. However, it is also possible to have one-to-many associations, and executive service 115 is extensible to allow such functionality.

In addition to application models, repository 120 is further configured to store other types of models, such as, for example, workflow models (e.g., 161), command definition models (e.g., 162), command record models (e.g., 163), and command output models (e.g., 164).

Generally, command processor 141 is configured to receive commands (e.g., 129) that request to operate on models in the repository. Commands can be defined through workflows put together using a pre-defined activity library. In some embodiments, received commands are software lifecycle commands. Accordingly, commands can be mapped to individual workflows to drive model-based applications through their respective lifecycles (e.g., valid, ready, deployed, running, etc.).

Thus, when a command is issued to command processor 141 on a subject model, command processor 141 facilitates execution of a workflow that corresponds to the command. The workflow could be short-lived or a long-running process, which may sometimes take days or longer to complete. While the command is being executed, command processor 141 can lock the subject model so that the same command or any other command cannot be issued against the subject model.

Command processor 141 also includes a built-in command logging mechanism. Each log entry associated with a command can be captured in a command output stored in repository 120. A complete command log is the history of all the operations that have been performed by the command. Possible records can include: the selected workflow; the start time for the workflow; entrance and exit of workflow activities; messages received from driver services 140 regarding logging, progress, results, etc.; user-specific data; and detailed error cases, including references to models that cause errors.

Accordingly, since repository 120 maintains command output (e.g., in log entries) for models, queries against command output provide a rich set of information about applications and commands. Queries can be issued against current as well as past command executions. Thus, tools 125 (or a user) can access the maintained command output to reason about problems with models and to act upon them accordingly. For example, command status can be fetched to understand the progress of a command.

In addition, complex queries can be created to have in-depth understanding of commands and their command output, based on various input criteria. For example, queries “by command”, “by requestor”, “by model”, “by time”, etc., as well as combinations thereof can be issued. These and other types of queries against command execution information provide enhanced visibility into operations within the system.
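
The sketch below illustrates how such combinable query criteria might be applied to a collection of command records. The record field names (“command”, “requestor”, “model_ref”, “created”) are assumptions for illustration.

```python
def query_command_records(records, command=None, requestor=None, model=None, since=None):
    """Illustrative query over stored command records; criteria may be combined."""
    result = list(records)
    if command is not None:
        result = [r for r in result if r["command"] == command]      # "by command"
    if requestor is not None:
        result = [r for r in result if r["requestor"] == requestor]  # "by requestor"
    if model is not None:
        result = [r for r in result if r["model_ref"] == model]      # "by model"
    if since is not None:
        result = [r for r in result if r["created"] >= since]        # "by time"
    return result
```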

FIG. 2 illustrates example relationships 200 between various models that can interoperate to process model-based commands. Generally, an instance of command record 202 maintains specific information about a command request. Instances of command record 202 can include a unique identifier representing a command ID, a reference to an instance of command definition 201, a date and time when the command is created and modified, a command status, a reference to an instance of a subject model 205 (e.g., an application model) upon which the command is to run, a workflow ID identifying the workflow instance executing the command, and a set of parameters for the workflow.

Instances of command output record 203 can be associated with command record 202 to keep track of logging information for the command. Instances of command output record 203 can include a date and time of creation, an output type, a message, and an output source.

As previously described, an instance of command record 202 can reference an instance of command definition 201. An instance of command definition 201 can include a command name, an instance of workflow definition 204, and a set of name-value pairs of parameters a workflow is to accept.

An instance of workflow definition 204 is a descriptive (e.g., XML or XAML) representation of a workflow.
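
One way to picture these relationships is the following sketch of the four record types and how they reference one another. The class and field names are assumptions; they merely restate the relationships described above.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class WorkflowDefinition:
    markup: str                       # descriptive (e.g., XML or XAML) representation

@dataclass
class CommandDefinition:
    name: str
    workflow: WorkflowDefinition      # workflow implementing the command
    parameters: dict                  # name-value pairs the workflow accepts

@dataclass
class CommandOutputRecord:
    created: datetime
    output_type: str
    message: str
    source: str

@dataclass
class CommandRecord:
    command_id: str                   # unique identifier
    definition: CommandDefinition
    created: datetime
    modified: datetime
    status: str
    subject_model_ref: str            # subject (e.g., application) model the command runs on
    workflow_id: Optional[str] = None # workflow instance executing the command
    parameters: dict = field(default_factory=dict)
    output: List[CommandOutputRecord] = field(default_factory=list)
```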

Accordingly, when command processor 141 receives a command request, it identifies a corresponding command definition in repository 120. From the command definition, command processor 141 further determines the appropriate workflow model that is to be used to implement the command request. A received command also includes a reference to a subject (e.g., application) model to be acted upon. Thus, after a workflow is identified, the model reference is passed to the workflow.

Command parameters from a command request can be validated against the parameters expected by the workflow (e.g., in command definition model 162A). If the parameter validity check succeeds, command processor 141 creates a command record (e.g., command record model 163A). Command processor 141 then starts execution of a workflow (e.g., based on workflow model 161A) in its runtime environment (e.g., workflow runtime 142). A workflow instance (e.g., workflow 147) is thus created and associated with the command record. Depending on the progress of workflow execution, the command record is updated accordingly to reflect the running status. Upon the completion or termination of the workflow, the command status is updated with success or failure.
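
A minimal sketch of this processing sequence is shown below. The repository and runtime interfaces (method names such as get_command_definition and create_workflow) are assumptions; the sketch only mirrors the validate / record / start / update sequence just described.

```python
def execute_command(command_request, repository, workflow_runtime):
    """Illustrative command-processor flow under assumed repository/runtime interfaces."""
    definition = repository.get_command_definition(command_request.command_ref)
    # Validate supplied parameters against those the workflow expects.
    unexpected = set(command_request.parameters) - set(definition.parameters)
    if unexpected:
        raise ValueError(f"unexpected parameters: {sorted(unexpected)}")
    # Create the command record, then start the workflow in the runtime.
    record = repository.create_command_record(definition, command_request.model_ref)
    workflow = workflow_runtime.create_workflow(definition.workflow)
    record.workflow_id = workflow.id
    record.status = "Running"
    repository.save(record)
    workflow.start(command_request.model_ref)  # asynchronous; progress updates the record
    return record.command_id                   # token used later to query command status
```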

FIG. 3 illustrates a flow chart of an example method for processing a model-based command. Method 300 will be described with respect to the components and data in computer architecture 100.

Method 300 includes an act of receiving a command request for a distributed application, the command request including a command reference to a command definition model defining a corresponding command, the command request also including a reference to an application model, the command request indicating that the corresponding command is to be implemented for the application model (act 301). For example, command processor 141 can receive command 129. Command 129 includes command reference “deploy”, that can be used to refer to a command definition model 162A (a model for implementing a deploy command). Command 129 also includes reference 153R to declarative application model 153. Thus, the command request indicates that a deploy command is to be implemented for declarative application model 153.

Method 300 includes an act of creating a command record for the received command request in the repository, the command record for storing information related to implementing the command request (act 302). For example, command processor 141 can create command record 163A in repository 120. Command record 163A can store information related to command 129.

Method 300 includes an act of identifying a workflow model from the command definition model, the workflow model describing how to implement the received command request (act 303). For example, command processor 141 can identify workflow model 161A from command definition model 162A. Workflow model 161A describes how to implement command 129 (“deploy”) for declarative application model 153. Method 300 includes an act of accessing the workflow model from the repository (act 304). For example, command processor 141 can access workflow model 161A from repository 120.

Method 300 includes an act of creating an instance of the workflow from the workflow model, the workflow instance having a command ID and including a set of pre-defined activities configured to interoperate to implement the command request (act 305). For example, command processor 141 can pass workflow model 161A to workflow runtime 142. Workflow runtime 142 can in turn generate workflow 147 based on workflow model 161A. Workflow instance 147 is created with command ID 148 (to distinguish it from other workflow instances). Activity library 143 includes pre-defined activities, including command driver 144, get subject state 145, and get execution command 146, configured to interoperate to implement command 129 (“deploy”) for declarative application model 153. Method 300 includes an act of storing the command ID in the command record (act 306). For example, command processor 141 can store command ID 148 in command record 163A.

Method 300 includes an act of submitting the application reference to the workflow instance to initiate the workflow instance (act 307). For example, reference 153R can be submitted to workflow 147 to initiate workflow 147. Workflow 147 can use reference 153R to access model 153 and begin processing model 153. Pre-defined activities (e.g., 144, 145, 146, etc.) in activity library 143 can be applied to model 153 to implement command 129 (“deploy”) for declarative application model 153.

Method 300 includes an act of recording information related to the behavior of the workflow instance within the command record as the workflow instance implements the command request for the application model (act 308). For example, during implementation of command 129 (“deploy”) for declarative application model 153, workflow 147 can generate behavior information 145 related to the behavior of workflow 147. Behavior information 145 can be stored in command record 163A, as workflow 147 implements command 129 (“deploy”).

In some embodiments, application models are locked during command implementation and then released. For example, declarative application model 153 can be locked to prevent another command from executing on application model 153 while workflow instance 147 implements command 129 for declarative application model 153. Subsequently, workflow 147 completes or a request to stop workflow 147 is received prior to workflow 147 completing implementation of command 129. When a request to stop (e.g., a cancel or terminate call) is received, a stop request is issued to workflow 147. It is subsequently determined that workflow instance 147 is stopped (e.g., completed, cancelled, or terminated). In response, the lock on declarative application model 153 is released such that other commands can be implemented for declarative application model 153.
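
The following sketch outlines the lock-and-release pattern around a command implementation, assuming simple repository locking primitives (try_lock / release_lock) that are not part of the description above.

```python
def run_command_with_lock(repository, model_ref, workflow):
    """Illustrative lock/release pattern; locking primitives are assumed."""
    if not repository.try_lock(model_ref):
        raise RuntimeError("another command is already running against this model")
    try:
        workflow.start(model_ref)
        workflow.wait_until_stopped()        # completed, cancelled, or terminated
    finally:
        repository.release_lock(model_ref)   # other commands may now be implemented
```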

Accordingly, command processor 141 provides a rich command execution pattern that is asynchronous. For example, command processor 141 can hand off command 129 to workflow runtime 142, which executes workflow 147 in a separate thread.

In some embodiments, once the workflow execution is started, a token is returned to the caller as the command ID for the ‘ExecuteCommand’ call. The command ID can be used later to retrieve the command status using the ‘GetCommandInfo’ method. Depending on the nature of a workflow, the command status may or may not be updated between status retrieval of consecutive calls.

Further, the functionality of workflow 147 can interoperate with work breakdown module 116 to refine a model to a level of detail sufficient for consumption by driver services 140.

FIG. 4A illustrates an example data flow 400 for a command protocol for processing a model-based command. Tools 125 sends ExecuteCommand message 401 to command processor 141. ExecuteCommand message 401 can include a reference to the subject (e.g., application) model and an indication of the command that is to be applied to the subject (e.g., application) model. Command processor 141 can check if another command is in progress against the same subject model. If no other command is in progress against the subject model, command processor 141 creates a command record for the indicated command. Command processor 141 identifies the appropriate workflow model to implement the indicated command.

Command processor 141 sends CreateWorkflow message 402 to workflow runtime 142. CreateWorkflow message 402 indicates the workflow type (e.g., by reference to a workflow model) to workflow runtime 142. Command processor 141 then sends StartWorkflow message 403 to workflow runtime 142. In response to StartWorkflow message 403, workflow runtime 142 starts the workflow and creates command ID 404 for the workflow. After the workflow is started, command processor 141 returns command ID 404 to tools 125.

During workflow execution, command output entries are created and associated with the command record. Upon completion, termination, cancellation, etc. of the workflow, the status of the command record can be updated. Since the command record is stored in repository 120, the command record can be queried by other services in computer architecture 100.

From time to time, tools 125 can query command processor 141 (e.g., as part of command protocol 181) for the status of the workflow. For example, tools 125 can send GetCommandInfo message 405 including command ID 404 to command processor 141. This indicates to command processor 141 that tools 125 is interested in the status of the command. When the workflow is still running, command processor can return message 406 back to tools 125. Subsequently, tools 125 can send GetCommandInfo message 407 including command ID 404 to command processor 141. If the workflow is now completed, command processor can return message 406 back to tools 125.
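
From the tools' perspective, this exchange amounts to a simple polling loop, sketched below. The client object and its method names are assumptions that loosely follow the ExecuteCommand/GetCommandInfo protocol described above.

```python
import time

def run_and_wait(tools_client, model_ref, command_name, poll_seconds=5.0):
    """Illustrative client-side use of the command protocol (assumed client API)."""
    command_id = tools_client.execute_command(command_name, model_ref)
    while True:
        info = tools_client.get_command_info(command_id)
        if info.status in ("Completed", "Failed", "Cancelled", "Terminated"):
            return info
        time.sleep(poll_seconds)  # workflow still running; poll again later
```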

Command processor 141 can also handle errors that occur during implementation of a command. For example, when processing long-running workflows or in interactions with other components, error situations may occur, such as, for example, due to misinformation in a model. In general, commands are idempotent and they can easily be reapplied. In the case of commands serving as transitions in a lifecycle, the result of command failure is that the lifecycle of a subject model remains in its current state. In order for the failure to be resolved, command processor 141 maintains detailed error information in the command record and its associated command output entries.

Command processor 141 also includes semantics for cancelling and terminating operations. For example, a command could be long running, or get into an unexpected error situation that keeps the command in the running state for some time. Under these (or other) circumstances, a user may choose to cancel such a command. As a result of cancellation, the command record is set to the ‘Cancelled’ state if the operation is completed successfully. As a further operation, if a command does not respond to a cancel request, command processor 141 can issue a Terminate call to attempt to force a workflow instance to terminate. The command record is set to the ‘Terminated’ state once that happens.

FIG. 4B illustrates an example data flow 450 for a command protocol to stop processing a model-based command. Within data flow 450, establishment of a workflow, return of command ID 404, and an initial status check can be performed as described with respect to data flow 400. Generally, cancel and terminate calls can be issued to unlock a model being operated upon to permit another command to be issued.

For example, at some point during execution, tools 125 can issue CancelCommand message 411. CancelCommand message 411 includes command ID 404. In response to receiving CancelCommand message 411, command processor 141 can attempt to Cancel the workflow. Depending on the status of the workflow (e.g., type of error, etc.) cancellation may or may not be successful.

Tools 125 can subsequently query command processor 141 (e.g., as part of command protocol 181) for the status of the workflow. For example, tools 125 can send GetCommandInfo message 412 including command ID 404 to command processor 141. This indicates to command processor 141 that tools 125 is interested in the status of the workflow. When the workflow is still running, command processor 141 can return message 413 back to tools 125. On the other hand, when the workflow is cancelled, command processor 141 can return a message indicating a cancelled status back to tools 125.

Message 413 indicates to tools 125 that CancelCommand message 411 was not successful. In response, tools 125 can issue TerminateCommand message 414. TerminateCommand message 414 includes command ID 404. In response to receiving TerminateCommand message 414, command processor 141 can attempt to Terminate the workflow. Tools 125 can subsequently query command processor 141 (e.g., as part of command protocol 181) for the status of the workflow. For example, tools 125 can send GetCommandInfo message 415 including command ID 404 to command processor 141. This indicates to command processor 141 that tools 125 is interested in the status of the workflow. When the workflow is terminated, command processor can return message 416 back to tools 125.

A terminate call is a more intrusive call that forces a workflow to stop when less intrusive calls, for example, cancel, are not working. Thus, when Cancel (or other mechanisms) fails, a terminate call can force a workflow stoppage as a last resort.
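
The cancel-then-terminate escalation can be pictured with the following sketch, again using an assumed client interface over the command protocol.

```python
import time

def stop_command(tools_client, command_id, cancel_timeout=60.0):
    """Illustrative cancel-then-terminate escalation (assumed client API)."""
    tools_client.cancel_command(command_id)
    deadline = time.monotonic() + cancel_timeout
    while time.monotonic() < deadline:
        if tools_client.get_command_info(command_id).status == "Cancelled":
            return "Cancelled"
        time.sleep(1.0)
    # Cancel did not take effect; force workflow stoppage as a last resort.
    tools_client.terminate_command(command_id)
    return "Terminated"
```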

In some embodiments, multiple executive services interoperate to process commands for model-based applications. FIG. 5 illustrates an example computer architecture 500 that facilitates distributing the implementation of a model-based command. As depicted, a plurality of executive services, including executive services 115, 515, and 516, share repository 120. Thus, any model stored in repository 120 is available to any of the plurality of executive services. Within architecture 500, repository 120 can also store state and availability information for each executive service. Accordingly, executive services can query repository 120 to become aware of other executive services and determine the state and availability of other executive services.

FIG. 6 illustrates a flow chart of an example method 600 for distributing the implementation of a model-based command. Method 600 will be described with respect to the components and data in computer architecture 500.

Method 600 includes an act of a first executive service receiving a command request for a distributed application, the command request including a command reference to a command definition model defining a corresponding command, the command request also including a reference to an application model, the command request indicating that the corresponding command is to be implemented for the application model (act 601). For example, executive service 115 can receive command 129. As previously described, command 129 includes command reference “deploy”, which can be used to refer to command definition model 162A (a model for implementing a deploy command). Command 129 also includes reference 153R to declarative application model 153. Thus, the command request indicates that a deploy command is to be implemented for declarative application model 153.

Method 600 includes an act of the first executive service determining that it is already running a plurality of other model-based commands (act 602). For example, executive service 115 can determine that it is already processing a plurality of other model-based commands. Based on the current processing demands, executive service 115 can determine that it lacks available resources to process command 129 and/or that processing command 129 would negatively impact the processing of other commands.

Method 600 includes an act of the first executive service querying the repository to determine if other executive services are available (act 603). For example, executive service 115 can send availability request 501 to repository 120 to determine if other executive services are available. Method 600 includes an act of the first executive service receiving an indication that a second executive service can be communicated with to load balance processing of model-based commands (act 604). For example, executive service 115 can receive indication 502 that indicates executive service 515 can be communicated with to process model-based commands.

Executive service 115 can communicate with executive service 515 to determine that executive service 515 can process command 129. Method 600 includes an act of the first executive service passing the command request to the second executive service in response to the indication that the second executive service is available so as to balance the load of command processing across the plurality of executive services (act 605). For example, executive service 115 can pass command 129 to executive service 515. Accordingly, this balances the load of command processing across the plurality of executive services in computer architecture 500. Executive service 515 can then process command 129 as previously described.
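
A minimal sketch of this load-balancing decision is shown below. The service and repository interfaces (running_command_count, available_executive_services, can_accept) are assumptions used only to illustrate the hand-off.

```python
def handle_command(command, local_service, repository, max_inflight=10):
    """Illustrative load balancing across executive services sharing a repository."""
    if local_service.running_command_count() < max_inflight:
        return local_service.process(command)
    # Query the shared repository to discover other available executive services.
    for peer in repository.available_executive_services(exclude=local_service.id):
        if peer.can_accept(command):
            return peer.process(command)      # pass the command request to the peer
    return local_service.process(command)     # no peer available; process locally
```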

Further, since executive services share the repository, workflow related commands can be submitted to any of the executive services. For example, executive service 115 can receive command status request 503 including the command ID for a workflow previously created by executive service 515. Executive service 115 can refer to repository 120 to obtain the status of the workflow from a corresponding command record (e.g., updated by executive service 515). Executive service 115 can return the status of the workflow in command status 504. Similarly, cancel command 506 can be issued to executive service 516 to cancel the workflow previously created by executive service 515.

Accordingly, embodiments of the present invention facilitate execution of model-based commands, including software lifecycle commands, using model-based workflow instances. Data related to command execution is stored in a shared repository such that command processors can understand their status in relationship to workflow instances. Further, since the repository is shared, command execution can be distributed and balanced across a plurality of different executive services. Embodiments also include model-based error handling and error recovery mechanisms.

The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

1. At a computer system, the computer system including an executive service and a repository that stores models, a method for processing a model-based command for a distributed application, the method comprising:

an act of receiving a command request for a distributed application, wherein the command request includes a command reference to a command definition model that defines a corresponding command and that identifies a workflow model, wherein the command request also includes a reference to an application model for the distributed application that is a declarative model that describes the structure and behavior of the distributed application by defining general operations of one or more modules in the distributed application, and wherein the command request indicates that the corresponding command is to be implemented for the application model;
an act of creating a command record for the received command request in the repository, wherein the command record stores information related to implementing the command request;
an act of identifying the workflow model from the command definition model, wherein the workflow model describes how to implement the received command request for the application model by defining a set of pre-defined activities configured to interoperate to implement the command request for the application model;
an act of accessing the workflow model from the repository;
an act of creating an instance of the workflow from the workflow model, wherein the workflow instance has a command ID and includes the set of pre-defined activities configured to interoperate to implement the command request for the application model;
an act of storing the command ID in the command record;
an act of submitting the application model reference to the workflow instance to initiate the workflow instance for the distributed application; and
an act of recording information related to the behavior of the workflow instance within the command record as the workflow instance implements the command request for the application model and for the distributed application.

2. The method as recited in claim 1, wherein the act of receiving a command request comprises an act of receiving a software lifecycle command request.

3. The method as recited in claim 2, wherein the act of receiving a software lifecycle command request comprises receiving a software lifecycle command request selected from among: verify, clean, check, fix, deploy, start, stop, undeploy.

4. The method as recited in claim 1, wherein the act of creating a command record for the received command request in the repository comprises an act of creating a command record from a corresponding command record model.

5. The method as recited in claim 1, wherein the act of creating an instance of the workflow comprises an act of creating a workflow that includes one or more driver-related activities and one or more state-related activities.

6. The method as recited in claim 1, wherein the act of creating an instance of the workflow comprises an act of creating a workflow that includes one or more activities pre-defined for use with the command request.

7. The method as recited in claim 1, further comprising:

an act of the workflow instance interoperating with a work breakdown module to prepare the application model for consumption by driver services.

8. The method as recited in claim 1, further comprising:

an act of locking the application model while the workflow instance implements the command request for the application model.

9. The method as recited in claim 1, further comprising:

an act of receiving a request for the status of the workflow instance from a user tool, the status request including the command ID for the workflow instance;
an act of using the command ID to refer to the command record for the workflow instance;
an act of accessing the state of the workflow instance from the command record; and
an act of returning the status to the user tool.

10. The method as recited in claim 1, further comprising:

an act of detecting that the workflow instance has completed; and
an act of updating the command record to indicate the workflow instance completed such that completion of the workflow can be reflected in response to subsequent queries.

11. The method as recited in claim 1, further comprising:

an act of extending the processing of model-based commands by adding or modifying a command or workflow model.

12. The method as recited in claim 1, further comprising:

an act of recording error information related to execution of the command in the repository;
an act of user tools subsequently accessing the error information to reason about problems with the application model; and
an act of the user tools acting upon the problems to correct the problems.

13. At a computer system, the computer system including an executive service and a repository that stores models, a method for releasing a command definition model for further use, the method comprising:

an act of receiving a command request for a distributed application, wherein the command request includes a command reference to a command definition model that defines a corresponding command and that identifies a corresponding workflow model, wherein the command request also includes a reference to an application model for the distributed application that is a declarative model that describes the structure and behavior of the distributed application by defining general operations of one or more modules in the distributed application, and wherein the command request indicates that the corresponding command is to be implemented for the application model and for the distributed application;
an act of creating a command record for the received command request in the repository, wherein the command record stores information related to implementing the command request;
an act of creating an instance of the workflow from the corresponding workflow model, wherein the workflow model describes how to implement the received command request for the application model, and wherein the workflow instance includes a set of pre-defined activities configured to interoperate to implement the command request for the application model;
an act of locking the application model to prevent further access to the application model while the workflow instance implements the command request for the application model;
an act of submitting the application model reference to the workflow instance to initiate the workflow instance for the application model;
an act of recording information related to the behavior of the workflow instance within the command record as the workflow instance implements the command request for the application model;
an act of determining that the workflow instance is to be stopped;
an act of determining that the workflow instance is stopped; and
an act of releasing the lock on the application model such that other commands can be implemented for the application model.
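
By way of illustration only, the following sketch shows a lock held on an application model for the duration of a command and released once the workflow instance has stopped, so that other commands can then be implemented for the same model; the ModelLocks helper is hypothetical and stands in for repository-level locking.

```python
import threading
from contextlib import contextmanager

class ModelLocks:
    """One lock per application model so only one command runs against it at a time."""
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def _lock_for(self, model_ref):
        with self._guard:
            return self._locks.setdefault(model_ref, threading.Lock())

    @contextmanager
    def locked(self, model_ref):
        lock = self._lock_for(model_ref)
        lock.acquire()      # further commands for this model now wait
        try:
            yield
        finally:
            lock.release()  # released once the workflow instance is stopped

locks = ModelLocks()
with locks.locked("app-model://order-service"):
    # The workflow instance would implement the command request here; any other
    # command for the same application model blocks until this block exits.
    pass
```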

14. The method as recited in claim 13, wherein the act of receiving a command request for a distributed application comprises an act of receiving a software lifecycle command for a distributed application.

15. The method as recited in claim 13, wherein the act of determining that the workflow instance is to be stopped comprises an act of determining that the workflow instance has completed.

16. The method as recited in claim 13, wherein the act of determining that the workflow instance is to be stopped comprises:

an act of receiving a subsequent cancel call requesting that the workflow instance be stopped prior to the workflow instance completing implementation of the command request for the application model; and
an act of issuing a stop request to the workflow instance.

17. The method as recited in claim 16, wherein the act of receiving a subsequent request to stop the workflow instance comprises:

an act of determining that the cancel call is not responding; and
an act of receiving a terminate call requesting that the workflow be terminated in response to the determination that the cancel call is not responding.

18. The method as recited in claim 1, wherein the application model comprises human-readable instructions.

19. The method as recited in claim 18, further comprising:

an act of the workflow instance interoperating with a work breakdown module to prepare the declarative application model for consumption by driver services by at least resolving one or more ambiguities in the human-readable instructions to translate the human-readable instructions into driver-specific instructions consumable by the driver services.
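
By way of illustration only, the following sketch shows a work-breakdown step that resolves ambiguities in declarative, human-readable statements (for example, an omitted target kind) and translates them into driver-specific instructions; the statement vocabulary and driver actions shown are hypothetical.

```python
DRIVER_ACTIONS = {
    # (intent, target kind) -> driver-specific instruction template (hypothetical)
    ("host", "web module"): "iis-driver: create-apppool {name}; deploy-package {name}",
    ("expose", "endpoint"): "wcf-driver: configure-binding {name}; open-listener {name}",
}

def break_down(declarative_statements):
    """Resolve ambiguities (defaults for omitted fields) and emit driver-specific instructions."""
    instructions = []
    for statement in declarative_statements:
        intent = statement["intent"]
        kind = statement.get("kind", "web module")  # ambiguity resolved by a default
        name = statement.get("name", "unnamed")
        template = DRIVER_ACTIONS.get((intent, kind))
        if template is None:
            raise ValueError(f"no driver instruction known for '{intent}' on a '{kind}'")
        instructions.append(template.format(name=name))
    return instructions

model = [
    {"intent": "host", "name": "OrderService"},  # target kind omitted in the declarative model
    {"intent": "expose", "kind": "endpoint", "name": "OrderServiceHttp"},
]
print(break_down(model))
```
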
References Cited
U.S. Patent Documents
4751635 June 14, 1988 Kret
5423003 June 6, 1995 Berteau
5602991 February 11, 1997 Berteau
5655081 August 5, 1997 Bonnell
5764241 June 9, 1998 Elliott
5809266 September 15, 1998 Touma
5893083 April 6, 1999 Eshghi
5913062 June 15, 1999 Vrvilo et al.
5937388 August 10, 1999 Davis et al.
5958010 September 28, 1999 Agarwal
6005849 December 21, 1999 Roach et al.
6026404 February 15, 2000 Adunuthula
6055363 April 25, 2000 Beals et al.
6070190 May 30, 2000 Reps
6167538 December 26, 2000 Neufeld et al.
6225995 May 1, 2001 Jacobs
6247056 June 12, 2001 Chou
6263339 July 17, 2001 Hirsch
6279009 August 21, 2001 Smirnov et al.
6330717 December 11, 2001 Raverdy
6334114 December 25, 2001 Jacobs
6336217 January 1, 2002 D'Anjou et al.
6415297 July 2, 2002 Leymann et al.
6477665 November 5, 2002 Bowman-Amuah
6618719 September 9, 2003 Andrei
6640241 October 28, 2003 Ozzie
6654783 November 25, 2003 Hubbard
6662205 December 9, 2003 Bereiter
6697877 February 24, 2004 Martin
6710786 March 23, 2004 Jacobs
6715145 March 30, 2004 Bowman-Amuah
6718535 April 6, 2004 Underwood
6801818 October 5, 2004 Kopcha
6847970 January 25, 2005 Keller et al.
6854069 February 8, 2005 Kampe
6886024 April 26, 2005 Fujita et al.
6907395 June 14, 2005 Hunt
6931644 August 16, 2005 Riosa
6934702 August 23, 2005 Faybishenko
6941341 September 6, 2005 Logston
7051098 May 23, 2006 Masters
7055143 May 30, 2006 Ringseth et al.
7065579 June 20, 2006 Traversat
7072807 July 4, 2006 Brown
7072934 July 4, 2006 Helgeson et al.
7079010 July 18, 2006 Champlin
7085837 August 1, 2006 Kimbrel
7096258 August 22, 2006 Hunt
7103874 September 5, 2006 McCollum et al.
7130881 October 31, 2006 Volkov et al.
7150015 December 12, 2006 Pace et al.
7155380 December 26, 2006 Hunt
7155466 December 26, 2006 Rodriguez
7162509 January 9, 2007 Brown et al.
7168077 January 23, 2007 Kim et al.
7174359 February 6, 2007 Hamilton, II et al.
7178129 February 13, 2007 Katz
7200530 April 3, 2007 Brown
7219351 May 15, 2007 Bussler et al.
7263689 August 28, 2007 Edwards et al.
7379999 May 27, 2008 Zhou et al.
7512707 March 31, 2009 Manapragada
7796520 September 14, 2010 Poustchi
7797289 September 14, 2010 Chan et al.
20020035593 March 21, 2002 Salim et al.
20020038217 March 28, 2002 Young
20020099818 July 25, 2002 Russell et al.
20020111841 August 15, 2002 Leymann et al.
20020120917 August 29, 2002 Abrari et al.
20020133504 September 19, 2002 Vlahos et al.
20020135611 September 26, 2002 Deosaran
20020147515 October 10, 2002 Fava et al.
20020147962 October 10, 2002 Hatanaka
20020198734 December 26, 2002 Greene
20030005411 January 2, 2003 Gerken
20030061342 March 27, 2003 Abdelhadi et al.
20030084156 May 1, 2003 Graupner et al.
20030135384 July 17, 2003 Nguyen
20030149685 August 7, 2003 Trossman
20030195763 October 16, 2003 Gulcu et al.
20040034850 February 19, 2004 Burkhardt
20040046785 March 11, 2004 Keller
20040078461 April 22, 2004 Bendich et al.
20040088350 May 6, 2004 Early
20040102926 May 27, 2004 Adendorff
20040148184 July 29, 2004 Sadiq
20040162901 August 19, 2004 Mangipudi
20040249972 December 9, 2004 White
20050005200 January 6, 2005 Matena et al.
20050011214 January 20, 2005 Schwetfuehrer
20050055692 March 10, 2005 Lupini
20050071737 March 31, 2005 Adendorff
20050074003 April 7, 2005 Ball
20050091227 April 28, 2005 McCollum et al.
20050120106 June 2, 2005 Albertao
20050125212 June 9, 2005 Hunt et al.
20050132041 June 16, 2005 Kundu
20050137839 June 23, 2005 Mansurov
20050155042 July 14, 2005 Kolb et al.
20050165906 July 28, 2005 Deo et al.
20050188075 August 25, 2005 Dias
20050216831 September 29, 2005 Guzik
20050261875 November 24, 2005 Shrivastava
20050268307 December 1, 2005 Gates et al.
20050278702 December 15, 2005 Koyfman
20050283518 December 22, 2005 Sargent
20060010142 January 12, 2006 Kim
20060010164 January 12, 2006 Netz
20060013252 January 19, 2006 Smith
20060036743 February 16, 2006 Deng
20060064460 March 23, 2006 Sugawara
20060070066 March 30, 2006 Grobman
20060070086 March 30, 2006 Wang
20060074730 April 6, 2006 Shukla et al.
20060074734 April 6, 2006 Shukla
20060095443 May 4, 2006 Kumar
20060123389 June 8, 2006 Kolawa et al.
20060123412 June 8, 2006 Hunt
20060155738 July 13, 2006 Baldwin
20060173906 August 3, 2006 Chu et al.
20060206537 September 14, 2006 Chiang
20060230314 October 12, 2006 Sanjar
20060235859 October 19, 2006 Hardwick
20060236254 October 19, 2006 Mateescu
20060265231 November 23, 2006 Fusaro et al.
20060277323 December 7, 2006 Joublin
20060277437 December 7, 2006 Ohtsuka et al.
20060294506 December 28, 2006 Dengler
20070005283 January 4, 2007 Blouin et al.
20070005299 January 4, 2007 Haggerty
20070006122 January 4, 2007 Bailey et al.
20070016615 January 18, 2007 Mohan et al.
20070033088 February 8, 2007 Aigner et al.
20070050237 March 1, 2007 Tien
20070050483 March 1, 2007 Bauer et al.
20070061776 March 15, 2007 Ryan et al.
20070067266 March 22, 2007 Lomet
20070088724 April 19, 2007 Demiroski
20070089117 April 19, 2007 Samson
20070094350 April 26, 2007 Moore
20070112847 May 17, 2007 Dublish
20070174228 July 26, 2007 Folting
20070174815 July 26, 2007 Chrysanthakopoulos et al.
20070179823 August 2, 2007 Bhaskaran
20070208606 September 6, 2007 Mackay et al.
20070233879 October 4, 2007 Woods
20070244904 October 18, 2007 Durski
20070245004 October 18, 2007 Chess
20070220177 September 20, 2007 Kothari
20070277109 November 29, 2007 Chen
20080005729 January 3, 2008 Harvey
20080010631 January 10, 2008 Harvey et al.
20080209414 August 28, 2008 Stein
20080244423 October 2, 2008 Jensen-Pistorius
20090049165 February 19, 2009 Long et al.
20090187662 July 23, 2009 Manapragada
20090265458 October 22, 2009 Baker
20100005527 January 7, 2010 Jeon
Foreign Patent Documents
1770510 April 2007 EP
0124003 April 2001 WO
WO0227426 April 2002 WO
2007072501 June 2007 WO
Other references
  • U.S. Appl. No. 12/105,083, filed Apr. 17, 2008.
  • Office Action dated Mar. 2, 2010 cited in U.S. Appl. No. 11/771,816.
  • Office Action dated Mar. 18, 2010 cited in U.S. Appl. No. 11/740,737.
  • Office Action dated Apr. 5, 2010 cited in U.S. Appl. No. 11/771,827.
  • Office Action dated Apr. 13, 2010 cited in U.S. Appl. No. 11/925,326.
  • U.S. Appl. No. 11/925,326, Mail Date Jul. 22, 2010, Notice of Allowance.
  • OSLO Suite 2006, "OSLO Suite is the leading platform for designing, building and executing adaptive business solutions", http://www.oslo-software.com/en/product.php.
  • Korb, John T., et al., “Command Execution in a Heterogeneous Environment”, 1986 ACM, pp. 68-74.
  • U.S. Appl. No. 11/844,177, filed Aug. 23, 2007, Sedukhin.
  • U.S. Appl. No. 11/740,737, filed Apr. 26, 2007, Sedukhin.
  • U.S. Appl. No. 11/771,827, filed Jun. 29, 2007, Sedukhin.
  • U.S. Appl. No. 11/771,816, filed Jun. 29, 2007, Sedukhin.
  • U.S. Appl. No. 11/925,326, filed Oct. 26, 2007, Christensen.
  • U.S. Appl. No. 11/925,680, filed Oct. 26, 2007, Sedukhin.
  • U.S. Appl. No. 11/925,591, filed Oct. 26, 2007, Sedukhin.
  • U.S. Appl. No. 11/925,067, filed Oct. 26, 2007, Sedukhin.
  • U.S. Appl. No. 11/925,184, filed Oct. 26, 2007, Voss.
  • U.S. Appl. No. 11/925,201, filed Oct. 26, 2007, Sedukhin.
  • U.S. Appl. No. 60/983,117, filed Oct. 26, 2007, Skierkowski.
  • Frecon, Emmanuel, et al., “DIVE: a scaleable network architecture for distributed virtual environments”, The British Computer Society, The Institution of Electrical Engineers and IOP Publishing Ltd, Mar. 6, 1998, pp. 91-100.
  • Baldi, Mario, et al., “Exploiting Code Mobility in Decentralized and Flexible Network Management”, Lecture Notes in Computer Science, vol. 1219, Proceedings of the First International Workshop on Mobile Agents, pp. 13-26.
  • Milenkovic, Milan, et al., “Towards Internet Distributed Computing”, Sep. 26, 2003, http://m.students.umkc.edu/mpshxf/TowardsIDC.pdf.
  • “Managing Complexity in Middleware”, by Adrian Colyer, Gordon Blair and Awais Rashid, IBM UK Limited, Hursley Park, Winchester, England and Computing Department, Lancaster University, Bailrigg, Lancaster, England, [online] [retrieved on Apr. 20, 2007], 6 pages. Retrieved from the Internet: http://222.aosd.net/2005/workshops/acp4is/past/asp4is03/papers/colyer.pdf.
  • “User Interface Declarative Models and Development Environments: A Survey”, by Paulo Pinheiro Da Silva, Department of Computer Science, University of Manchester, Manchester, England [online] [retrieved on Apr. 20, 2007], 20 pages. Retrieved from the Internet: http://www.cs.utep.edu/paulo/papers/PinheirodaSilvaDSVIS2000.pdf.
  • “Architecturing and Configuring Distributed Application with Olan”, by R. Balter, L. Bellissard, F. Boyer, M Riveill and J.Y. Vion-Dury, Middleware 98 Conference Report, INRIA, France, [online] [retrieved on Apr. 20, 2007], 15 pages. Retrieved from the Internet: http://www.comp.lancs.ac.uk/computing/middleware98/papers.html.
  • “A Load Balancing Module for the Apache Web Server”, Author Unknown, [online] [retrieved on Apr. 20, 2007], 9 pgs. Retrieved from the Internet: http://www.backhand.org/ApacheCon2000/US/modbackhandcoursenotes.pdf.
  • “Performance Tuning and Optimization of J2EE Applications on the JBoss Platform”, by Samuel Kounev, Bjorn Weis and Alejandro Duchmann, Department of Computer Science, Darmstadt University of Technology, Germany, [online] [retrieved on Apr. 20, 2007], 10 pgs. Retrieved from the Internet: http://www.cl.cam.ac.uk/˜sk507/pub/04-cmg-JBoss.pdf.
  • “Outlier Detection for Fine-Grained Load Balancing in Database Clusters”, by Jin Chen, Gokul Soundararjan, Madalin Mihailescu and Cristiana Amza, Department of Computer Science, Department of Electrical and Computer Engineering, University of Toronto, [online] [retrieved on Apr. 20, 2007], 10 pgs. Retrieved from the Internet: http://www.cs.toronto.edu/˜jinchen/papers/smdb07.pdf.
  • Dias, M. Bernardine, et al., “A Real-Time Rover Executive Based on Model-Based Reactive Planning”, The 7th International Symposium on Artificial Intelligence, Robotics and Automation in Space, May 2003.
  • Goble, Carole, et al., “Building Large-scale, Service-Oriented Distributed Systems using Semantic Models”, http://www.jisc.ac.uk/media/documents/programmes/capital/gridstandardsaboveogsa.pdf, 21 pages.
  • Robinson, William N., “Implementing Rule-based Monitors within a Framework for continuous Requirements Monitoring” Proceedings of the 38th Hawaii International Conference on System Sciences, 2005 IEEE, 10 pages.
  • Maghraoui, Kaoutar El, et al., “Model Driven Provisioning: Bridging the Gap Between Declarative Object Models and Procedural Provisioning Tools”, http://wcl.cs.rpi.edu/papers/middleware06.pdf.
  • Von, Vorgelet, et al., “Dynamic Upgrade of Distributed Software Components”, 2004, 191 pages.
  • Poslad, Stefan, et al., “The FIPA-OS agent platform: Open Source for Open Standards”, Apr. 2000, 17 pages.
  • Software News, “Progress Software Extends Lead in Distributed SOA” 2007, 6 pages.
  • Eidson, Thomas M., “A Component-based Programming Model for Composite, Distributed Applications”, Institute for Computer Applications in Science and Engineering Hampton, VA, May 2001, 1 page.
  • Bauer, Michael A., “Managing Distributed Applications and Systems: An Architectural Experiment”, Jan. 31, 1997, 46 pages.
  • Tawfik, Sam, “Composite applications and the Teradata EDW”, Extend the capabilities of your enterprise data warehouse with supporting applications, Teradata Magazine online, Archive: vol. 6, No. 4, Dec. 2006, 3 pages.
  • Alpern, Bowen, et al, “PDS: A Virtual Execution Environment for Software Deployment”, 2005, pp. 175-185.
  • Talcott, Carolyn L., MTCoord 2005 Preliminary Version, “Coordination Models Based on a Formal Model of Distributed Object Reflection”, 13 pages.
  • Leymann, F., et al., “Web Services and Business Process Management”, IBM Systems Journal, vol. 41, No. 2, 2002, New Developments in Web Services and E-commerce, 11 pages.
  • Ivan, A.-A, et al., “Partitionable services: A framework for seamlessly adapting distributed applications to heterogeneous environments”, High Performance Distributed Computing, 2002. HPDC-11 2002. Proceedings. 11th IEEE International Symposium, 1 page.
  • Urban, Susan D., et al., “Active Declarative Integration Rules for Developing Distributed Multi-Tiered Applications”, 3 pages.
  • Bischoff, Urs, et al., “Programming the Ubiquitous Network: A Top-Down Approach” System Support for Ubiquitous Computing Workshop (UbiSys'06), Orange County, USA, Sep. 2006, 8 pages.
  • Albrecht, Jeannie, et al., “Remote Control: Distributed Application Configuration Management, and Visualization with Plush”, Proceedings of the Twenty-first USENIX Large Installation System Administration Conference (LISA), Nov. 2007, 16 pages.
  • Office Action dated Sep. 14, 2009 cited in U.S. Appl. No. 11/740,737.
  • Office Action dated Oct. 14, 2009 cited in U.S. Appl. No. 11/771,827.
  • Office Action dated Oct. 1, 2009 cited in U.S. Appl. No. 11/771,816.
  • Nastel Technologies, Inc., “AutoPilot Business Dashboard Configuration and User's Guide Version 4.4”, 2006, AP/DSB 440.001, 82 pages.
  • TIBCO the Power of Now, “TIBCO BusinessFactor”, 2006, 2 pages.
  • TIBCO, http://www.tibco.com/software/businessactivitymonitoring/businessfactor/default.jsp, Copyright 2000-2007, 2 pages.
  • “Factal:Edge Enlists CMLgroup to Bring Visualization to Business Performance Management Clients”, http://extranet.fractaledge.com/News/PressReleases/2006/060829, 2006, 2 pages.
  • U.S. Appl. No. 11/925,184, mail date Jan. 14, 2011, Office Action.
  • U.S. Appl. No. 11/740,737, mail date Sep. 13, 2010, Office Action.
  • Shaojie Wang, Synthesizing Operating System Based Device Drivers in Embedded Systems, 2003.
  • U.S. Appl. No. 11/771,827, mail date Nov. 29, 2010, Notice of Allowance.
  • U.S. Appl. No. 11/925,067, mail date Dec. 6, 2010, Notice of Allowance.
  • U.S. Appl. No. 11/740,737, mail date Feb. 10, 2011, Office Action.
Patent History
Patent number: 7974939
Type: Grant
Filed: Oct 26, 2007
Date of Patent: Jul 5, 2011
Patent Publication Number: 20090112873
Assignee: Microsoft Corporation (Redmond, WA)
Inventors: Karthik Arun Nanjangud Bhaskar (Kirkland, WA), Erik B. Christensen (Seattle, WA), Amol Sudhakar Kulkarni (Bothell, WA), Prasad Sripathi Panditharadhya (Sammamish, WA), Sundeep Sahi (Seattle, WA), Igor Sedukhin (Issaquah, WA), Haoran Andy Wu (Sammamish, WA)
Primary Examiner: James Trujillo
Assistant Examiner: Linh Black
Attorney: Workman Nydegger
Application Number: 11/925,079
Classifications