EXECUTION OF A WORKFLOW THAT INVOLVES APPLICATIONS OR SERVICES OF DATA CENTERS

A service exchange includes an orchestrator to execute a workflow that involves a plurality of applications and services of a plurality of data centers. A message broker is to exchange messages between the orchestrator and the applications. Adapters are to perform protocol and interface translations for information communicated between at least some of the applications and the message broker.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Application No. 61/913,799, filed Dec. 9, 2013, which is hereby incorporated by reference.

BACKGROUND

An enterprise may employ multiple applications to perform various tasks. The tasks can be performed by various applications, and in some cases multiple applications can perform overlapping tasks. As an example, the tasks can include tasks associated with information technology (IT) management, such as management of development and production of program code, management of a portfolio of products or services, support management, IT service management, cloud and Software as a Service (SaaS) service management and so forth. IT management performs management with respect to components of an IT environment, where the components can include computers, storage devices, communication nodes, machine-readable instructions, and so forth. Various aspects of IT management can be modeled by an Information Technology Infrastructure Library (ITIL) (that provides a set of best practices for IT management), a Business Process Framework (eTOM) from the TM Forum, and so forth. With advancements in IT management technology, new IT management processes have been introduced, such as self-service IT, IT as a service provider, DevOps and autonomous IT, and so forth.

BRIEF DESCRIPTION OF THE DRAWINGS

Some implementations are described with respect to the following figures.

FIG. 1A is a schematic diagram of an example arrangement including data centers according to some implementations.

FIG. 1B is a block diagram of a gateway according to some implementations.

FIG. 2 is a block diagram of a service exchange according to some implementations.

FIG. 3 is a block diagram of a service exchange that interacts with a legacy integration framework, according to further implementations.

FIG. 4 is a flow diagram of a process of a service exchange according to some implementations.

FIG. 5 is a block diagram of an example computer system according to some implementations.

DETAILED DESCRIPTION

Workflows performed by an enterprise can involve the use of a number of applications. An “enterprise” can refer to a business concern, an educational organization, a government agency, an individual, or any other entity. A “workflow” can refer to any process that the enterprise can perform, such as a use case. Such a process of the workflow can also be referred to as an “end-to-end process” or an “enterprise process” since the process involves a number of activities of the enterprise from start to finish. A “use case” can refer to any specific business process or other service that an enterprise desires to implement. An “application” can refer to machine-readable instructions (such as software and/or firmware) that are executable. The application can include logic associated with an enterprise process, which can implement or support all or parts of the enterprise process (or processes). An application can be an application developed by the enterprise, or an application provided by an external vendor of the enterprise. An application can be provided on the premises of the enterprise, or in the cloud (public cloud or virtual private cloud), and the application can be a hosted application (e.g. an application provided by a provider over a network), a managed service (a service managed and/or operated by a third party that can be hosted or on premise), or a software as a service (SaaS) (a service available on a subscription basis to users), and so forth. In some cases, multiple applications used by the enterprise may be provided by different vendors.

Within a portfolio of applications used by an enterprise, many applications may not be able to directly interact with each other. In general, an application implements a particular set of business logic and is not aware of other applications that are responsible for performing other processes. The design of the application may or may not have taken into account the presence of other applications upstream or downstream (with respect to an end-to-end process). This is especially true for older (legacy) applications. More recently, applications can at least expose well-defined application programming interfaces (APIs) that assume that the applications will be interacting with other systems. Such applications can be called through their APIs or can call the APIs of other applications. Even with such APIs, applications may not readily interact with each other. Different applications may employ different data formats, different languages, different interfaces, different protocols, and so forth.

Application developers have developed a portfolio of applications that rely on using point-to-point integration to provide some level of integration across the portfolio. With point-to-point integration, a given application is aware of another application in the portfolio that is upstream or downstream of the given application. Such applications are mutually aware of each other.

A point-to-point integration mechanism can include a component (or multiple components) provided between applications to perform data transformations, messaging services, and other tasks to allow the applications to determine how and when to communicate and interact with each other.

Different point-to-point integration mechanisms can be provided for different subsets of applications. If there are a large number of applications in a portfolio of applications used by an enterprise, then there can be a correspondingly large number of point-to-point integration mechanisms.

As applications evolve (e.g. new release of an application, new functionality added to an application, variation of the expected use cases, variation of interaction to take place between applications), corresponding point-to-point integration mechanisms may have to be modified and/or re-tested. Modifying or re-testing an integration mechanism between applications can be a time-consuming and costly exercise, particularly if there are a large number of integration mechanisms deployed by the enterprise. This exercise can rapidly become a complex combinatorial exercise. If point-to-point integration is used, an enterprise may be hesitant to upgrade applications, to add new applications, to change application vendors, or to modify processes, since doing so can be complex and costly. However, maintaining a static portfolio of applications can prevent an enterprise from being agile in meeting evolving demands by users or customers of the enterprise. If an enterprise has applications provided by multiple vendors, additional challenges may arise. The application can be built to support updated releases of other applications, which adds complexity to application development if an enterprise wishes to deploy another release of an application of another vendor.

In accordance with some implementations of the present disclosure, a service exchange and integration framework (referred to as a “service exchange” in the ensuing discussion) is provided that is able to integrate applications in a flexible manner, and orchestrate execution of workflows (which can refer to enterprise processes or use cases as noted above). Applications are used to implement their respective logic parts of each workflow. These applications are orchestrated to automate the end-to-end enterprise process or use case.

According to the present disclosure, orchestrating execution of a workflow can refer to modeling and executing the logic of sequencing of the tasks of the workflow. Some of the tasks of the workflow are delegated using the orchestration to be performed by the logic of the applications. As an example, a workflow can include an order fulfillment workflow. An order fulfillment workflow can include the following tasks: receive an order from a customer, determine applications that are to be involved in fulfilling the order, invoke the identified applications to fulfill the order, and return a status (e.g. confirmation number or information to further manage the order, such as to view, update, cancel, or repeat the order) to the customer. Note that the foregoing example order fulfillment workflow is a simplified workflow that includes a simple collection of tasks. An actual order fulfillment workflow may involve many more tasks.
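The simplified order fulfillment workflow above can be sketched as follows. This is an illustrative sketch only; the function, class, and field names are assumptions introduced for the example, not part of the disclosure.

```python
# Hypothetical sketch of the simplified order fulfillment workflow: receive an
# order, determine the involved applications, invoke them, and return a status.

def fulfill_order(order, application_registry):
    """Run the example order fulfillment workflow end to end."""
    # Task 1: receive the order from a customer (here, passed in as a dict).
    customer = order["customer"]

    # Task 2: determine which applications are to be involved in fulfilling
    # the order.
    involved = [app for app in application_registry
                if app.handles(order["product"])]

    # Task 3: invoke the identified applications to fulfill the order.
    results = [app.invoke(order) for app in involved]

    # Task 4: return a status (e.g. a confirmation number) to the customer.
    return {"customer": customer,
            "confirmation": f"CONF-{order['id']}",
            "steps": results}
```

An actual order fulfillment workflow would involve many more tasks, error handling, and state tracking, as discussed below.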

In some cases, a workflow can involve processes of applications in multiple data centers or in the cloud or over the internet. A “data center” can refer to an arrangement of resources (including computing resources such as computers or processors, storage resources to store information, communication resources to communicate information, and machine-executable instructions such as applications, operating systems, and so forth). A data center can be provided by an enterprise. A data center can also be a public cloud, a private cloud, or a hybrid cloud that is made up of a public cloud and a private cloud. A public cloud can be provided by a provider that is different from the enterprise. A private cloud can be provided by the enterprise. A data center is “provided” by a provider (enterprise or other provider) if the provider manages the resources of the data center and/or makes available the resources of the data center to users, machines, and/or program code. Multiple data centers can be coupled over a private network and/or over a public network such as the Internet.

A “cloud” can refer to an infrastructure including resources that are available for use by users. The resources of a public cloud are available over a public network, to multiple tenants (or customers), who are able to subscribe or rent some share of the public cloud resources. In some cases, a public cloud provided by a third party provider can also be deployed on the premises of the enterprise.

The resources of a private cloud are dedicated for use by users within organizations of the enterprise. A cloud can also be a hybrid cloud, which includes both a public cloud and a private cloud. Another type of cloud is a managed cloud, which includes resources of the enterprise that are managed by a third party provider.

More traditionally, an enterprise can deploy multiple data centers, such as in different geographic regions (e.g. across a city, a state, a country, or the world) to achieve redundancy, high availability (to ensure availability of resources or to provide disaster recovery in case of failure of a data center), or scalability (increasing resources to meet increased demand). Deployment of multiple data centers can also be for satisfying government regulations (e.g. regulation specifying that certain data has to be kept in a specific country). These data centers can be managed by the enterprise or by a third party provider. A data center can also provide services such as Software as a Service (SaaS) services. SaaS can refer to an arrangement in which software (or more generally, machine-executable instructions) is made available to users on a subscription basis.

Orchestrating workflows across different data centers can be associated with various challenges. For example, use of different data centers may involve communications through many firewalls. As another example, the data centers can be coupled over a network (such as the Internet) that can be associated with unexpected delays, packet losses, etc., particularly during times of high usage. In such cases, guaranteeing the satisfaction of target goals associated with a service level agreement (SLA) or quality of service (QoS) level can be difficult. Also, managing security can be more complex. In addition, if cloud resources and/or SaaS services are employed, instances of applications that are to be orchestrated can be dynamically created, moved, replaced, and so forth, which can involve the use of dynamic addressing; as a result, it can be more difficult to address such application instances.

The foregoing issues also exist when attempting to broker messages reliably and with desired performance in a manageable manner across the Internet or among clouds. Legacy integration frameworks, such as Enterprise Service Bus (ESB) integration frameworks, also experience the foregoing challenges. A legacy integration framework can refer to an integration framework different from that provided by the service exchange according to the present disclosure. ESB refers to an architecture model for designing and implementing communication between mutually interacting applications in a service-oriented architecture (SOA), where the applications are usually distributed within a data center. While theoretically it is also possible to distribute applications across the Internet or among clouds, that can be associated with issues relating to changing use cases and routing, dynamic addressing, and message delivery in a manageable manner across the Internet or data centers. The ESB framework provides for monitoring and control of routing of messages between applications, resolving contention between applications, and other tasks.

The service exchange according to some implementations of the present disclosure is able to interact with an ESB integration framework or another framework, e.g. with a message queue for exchanging messages among applications that would already be present to integrate the applications.

In accordance with some implementations, techniques or mechanisms enable the orchestrated execution of applications across multiple data centers. Additionally, the service exchange according to some implementations enables cloud-scale message brokering (to allow an exchange of messages across clouds) or a cloud event driven architecture (EDA). An EDA refers to a framework that orchestrates behavior around the production, detection, and consumption of events, as well as the responses the events evoke. A cloud EDA refers to such a framework implemented across clouds.

FIG. 1A illustrates an example arrangement that includes a data center 100, a data center 102, and a data center 104. The data centers 100, 102, and 104 can be enterprise data centers and/or clouds as discussed above. Although just three data centers are shown in FIG. 1A, it is noted that in other examples, different numbers of data centers can be provided.

The data center 100 includes a service exchange 110, which includes an orchestrator 112, a message broker 114, and adapters 116. The adapters 116 are provided between the message broker 114 and respective applications 118. Although the applications 118 are depicted as being part of the service exchange 110 in FIG. 1A, it is noted that in other examples, the applications 118 can be separate from the service exchange 110, and some applications 118 can even be external of the data center 100. For example, some applications 118 can be provided by an entity that is separate from the provider of the data center 100.

Each of the orchestrator 112, message broker 114, and adapters 116 can be implemented as a combination of machine-executable instructions and processing hardware, such as a processor, a processor core, an application-specific integrated circuit (ASIC) device, a programmable gate array, and so forth. In other examples, any of the orchestrator 112, message broker 114, and adapters 116 can be implemented with just processing hardware.

The message broker 114 is operatively or communicatively coupled to the orchestrator 112 and the adapters 116. Generally, the message broker 114 is used to exchange messages among components, including the orchestrator 112 and the adapters 116. A message can include any or some combination of the following: a call (e.g. API call) or an event (e.g. response, result, or other type of event). The message broker 114 is responsible for ensuring that API calls and events (e.g. responses, results, etc.) are sent to the correct adapter or to the correct workflow instance (multiple workflow instances may execute concurrently). Alternatively, the endpoints (adapters and workflow instances) may all receive a call or event and make a decision regarding whether each endpoint should process the call or event.

The message broker 114 further includes a message confirmation engine (MCE) 119 to perform the following tasks. The message confirmation engine 119 ensures that a message put on the message broker 114 is delivered to a target by checking for a confirmation of receipt of the message by the target (e.g. an adapter 116), such as with a positive acknowledgement, for example. Message confirmation is thus implemented with the message broker 114 and the adapters 116. If the target does not confirm receipt of the message, the message confirmation engine 119 can cause the message broker 114 to resend the message to the target.

The message confirmation engine 119 can also ensure that the target processes the message (e.g. by checking that the target returns a confirmation of commit). The confirmation of commit is an indication of successful completion of processing of the message. An application can send the confirmation of commit, or alternatively, an adapter 116 can query the application for the confirmation of commit. If the confirmation of commit is not received, then the message confirmation engine 119 can cause the message broker to resend the message, or to indicate an error, depending on the type of message and application/flow design. Idempotent calls on the applications can be repeated as often as appropriate until commit is confirmed. When the calls cannot be repeated, error messages are sent and the workflow handles (in its logic) what to do to perform rollback or notification. Rollback can refer to rolling back the workflow to a prior known good state. Notification can include notifying a management system (or management systems). The action to take in response to a lack of confirmation of commit can be determined from the canonical data model 117 (discussed further below).
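The resend-until-confirmed behavior described above can be sketched as follows. This is a minimal illustration under stated assumptions; the function names, the retry limit, and the target interface are hypothetical, not part of the disclosure.

```python
# Illustrative sketch of the message confirmation engine's behavior: resend a
# message until receipt is acknowledged and commit is confirmed, repeating
# idempotent calls and signaling an error otherwise.

def deliver(message, target, idempotent=True, max_attempts=3):
    """Send a message, checking for receipt and commit confirmations."""
    for _attempt in range(max_attempts):
        receipt = target.send(message)      # positive acknowledgement of receipt?
        if not receipt:
            continue                        # message lost: resend it
        if target.committed(message):       # confirmation of commit?
            return "committed"
        if not idempotent:
            # The call cannot be repeated: signal an error so the workflow
            # logic can perform rollback or notify a management system.
            return "error"
    return "error"
```

In practice the decision to resend or raise an error would depend on the type of message and the application/flow design, as noted above.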

The message confirmation engine 119 can also ensure that messages are delivered in a managed manner (managed delivery of messages); in other words, messages are delivered without loss and with acceptable delays (delays within specified target levels). The message confirmation engine 119 can also perform remediation if message loss or delays occur. Delivery times for messages can be monitored, and messages that are lost or excessively delayed (delayed longer than a specified target goal) are re-sent. Remediation can include resending the message that is missing or delayed to allow the endpoint to not have to wait anymore. Remediation can also include notifying other systems such as network or traffic management systems to try to get more or better or alternate bandwidth, for example.

The message confirmation engine 119 can also perform secure communication of messages with endpoints, by applying security to the messages. Applying security can include encryption of messages, mutual authentication of messages, or use of certificates. Encryption of a message is accomplished by using a key (e.g. public key or private key) to encrypt the message. Mutual authentication refers to the two communicating endpoints authenticating each other, such as with use of credentials or other security information. A certificate can be used to establish a secure communication session between two endpoints.

To manage similar issues for communication of messages with the other data centers 102 and 104, a gateway 142 can also be provided in the data center 100. The gateway 142 is discussed further below, following the discussion of operations of the orchestrator 112, the message broker 114, and the adapters 116. If the other data center does not support deployment of a gateway and service exchange, then the gateway on the enterprise service exchange side can only do as much as it can with available protocols. In alternative examples, the remote cloud can support the same protocols or mechanisms of the gateway, or a gateway and service exchange of a remote data center that is geographically close to or collocated with the remote data center can be used to interact with the cloud.

In addition, the message broker 114 is able to send a confirmation of successful completion of an application in a workflow to the orchestrator 112 or to a requester that initiated the workflow.

The orchestrator 112 is used to orchestrate the execution of a specific workflow 113 that involves tasks performed by multiple applications (e.g. a subset or all of applications 118). To perform a workflow, flow logic can be loaded into the orchestrator 112, and the flow logic is executed by the orchestrator 112. “Flow logic” can include a representation of a collection of tasks that are to be performed. The flow logic can be in the form of program code (e.g. a script or other form of machine-executable instructions), a document according to a specified language or structure (e.g. Business Process Execution Language (BPEL), Business Process Model and Notation (BPMN), etc.), or any other type of representation (e.g. Operations Orchestration from Hewlett-Packard, YAML Ain't Markup Language (YAML), Mistral from OpenStack, etc.). The flow logic can be generated by a human, a machine, or program code, and can be stored in a machine-readable or computer-readable storage medium accessible by the orchestrator 112.

The orchestrator 112 is able to execute multiple flow logics to perform respective workflows. Multiple workflows and workflow instances (instances of a particular workflow refer to multiple instantiations of the particular workflow) can be concurrently executed in parallel by the orchestrator 112.

The orchestrator 112 is able to evaluate (interpret or execute) a flow logic, and perform tasks specified by the flow logic in response to a current state of the workflow and calls and events received by the orchestrator 112. A workflow can be a stateful workflow. As a stateful workflow is performed by the orchestrator 112, the orchestrator 112 is able to store a current state of the workflow, to indicate the portion of the workflow already executed. Based on the workflow's current state and a received event, the orchestrator 112 is able to transition from a current state to a next state of the workflow and can determine a next action to perform, where the next action may involve the invocation of another application. Whenever the orchestrator 112 receives a new call or event (e.g. response, results, or other event), the orchestrator 112 evaluates which workflow instance is to receive the call or event and loads the workflow instance with a correct state. In some cases, it is possible that multiple workflow instances may check if they are supposed to be a recipient of a call or event.
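The state-transition behavior of a stateful workflow can be sketched as follows. The transition table and names are hypothetical examples, not taken from the disclosure: given a stored current state and a received event, the orchestrator determines the next state and the next action (which may invoke another application).

```python
# Minimal sketch of a stateful workflow: the orchestrator stores the current
# state and, based on that state and an incoming event, transitions to a next
# state and determines a next action to perform.

TRANSITIONS = {
    # (current_state, event) -> (next_state, next_action)
    ("start",      "order_received"): ("validating", "call:validator"),
    ("validating", "order_valid"):    ("fulfilling", "call:fulfillment"),
    ("fulfilling", "order_shipped"):  ("done",       "notify:customer"),
}

class WorkflowInstance:
    def __init__(self):
        self.state = "start"

    def on_event(self, event):
        """Transition to the next state and return the next action."""
        next_state, action = TRANSITIONS[(self.state, event)]
        self.state = next_state
        return action
```

Because the current state is stored, the orchestrator can reload a workflow instance with the correct state whenever a new call or event arrives for it.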

In other examples, a workflow can be a stateless workflow, which does not keep track of a current state of the workflow. Rather, the stateless workflow performs corresponding next steps or actions as events are received by the orchestrator 112. Use of a stateless workflow is generally suitable for asynchronous operation (discussed further below). A stateful workflow can be used with both a synchronous operation and asynchronous operation.

The events (e.g. results, responses, etc.) received by the orchestrator 112 can be provided by applications that are invoked in the workflow or from another source, such as through an interface 115 (e.g. an application programming interface (API)) of the message broker 114. The message broker 114 can also direct an event to a particular workflow instance (note that there can be multiple workflow instances executing concurrently). If the workflow instance is a stateful workflow, then an event can be provided to a state of the workflow.

An external entity can communicate with the message broker 114 using the API 115, such as to trigger a workflow (enterprise process or use case) or make progress (or step through) the workflow. The API 115 of the message broker can also be used to communicate a status update of a workflow.

The message broker 114 can include queues for temporarily storing information to be forwarded to target components, and can include information forwarding logic that is able to determine a destination of a unit of information based on identifiers and/or addresses contained in the unit of information.
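The queueing and forwarding behavior described above can be sketched as follows. This is an in-memory illustration only; the class, method, and field names are assumptions, and a production broker (e.g. one using AMQP, as discussed below) would add persistence, acknowledgements, and security.

```python
# Sketch of the message broker's queues and forwarding logic: messages are
# held in per-target queues, and the forwarding logic picks the destination
# queue from an identifier carried in the message itself.

from collections import defaultdict, deque

class MessageBroker:
    def __init__(self):
        self.queues = defaultdict(deque)   # one queue per registered target

    def publish(self, message):
        # Forwarding logic: route on the target identifier in the message.
        self.queues[message["target"]].append(message)

    def consume(self, target):
        """Return the next queued message for a target, or None if empty."""
        q = self.queues[target]
        return q.popleft() if q else None
```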

In some examples, the message broker 114 can employ an Advanced Message Queuing Protocol (AMQP), which is an open standard application layer protocol for message-oriented middleware. AMQP is described in a specification provided by the Organization for the Advancement of Structured Information Standards (OASIS). An example of a message broker that employs AMQP is RabbitMQ, which is an open source message broker application.

In other examples, other types of message brokers that employ other messaging or information exchange protocols can be used.

The information exchanged using the message broker 114 can include information sent by the orchestrator 112, where the information sent by the orchestrator 112 can include application calls and/or data. An “application call” can refer to a command (or commands) or any other type of message that is issued to cause an instance of a respective application to execute to perform a requested task (or tasks).

The information exchanged using the message broker 114 can also include information sent by the applications. For example, the information sent by an application can include response information that is responsive to a respective application call. The information sent by the applications can also include information sent autonomously by an application without a corresponding request from the orchestrator 112. Information from an application can be included in an event sent by the application, where an “event” can refer to a representation of a unit of information. The event can include a response, a result, or any other information. Note that an event from an application can be in response to a synchronous call or asynchronous call. A synchronous call to an application by the orchestrator 112 is performed for a synchronous operation. In a synchronous operation, a workflow waits for a response to be received before proceeding further (in other words, the workflow blocks on the response). An asynchronous operation of a workflow refers to an operation in which the workflow does not wait for a response from an application in response to a call to the application.

In other examples, an event from an application can be due to something else occurring at the application level or in the environment (e.g. a support agent closes a ticket when using the application). Such an event can be sent to the workflow, such as the workflow for an incident case exchange use case (explained further below).

An event or call can also be received through the API 115 of the message broker 114 from another source.

The message broker 114 is able to respond to a call (such as an API call from the orchestrator 112) by making a corresponding call to the API of the respective instance of an application that is executing in a particular workflow instance. Adapters 116 may register with the message broker 114, and the message broker 114 can use the registration to determine how to direct a call, and how events (e.g. results, responses, etc.) are tagged or associated to a workflow instance. In some cases, it is possible that a message (a call or event) may be addressed to several workflow instances, in which case the message broker 114 can direct the message to the several workflow instances.

When performing a workflow based on flow logic executed by the orchestrator 112, the orchestrator 112 can issue application (synchronous or asynchronous) calls to the message broker 114 for invoking the applications at corresponding points in the workflow. A call can also be made by the orchestrator as part of throwing an event (which refers to the workflow deciding to communicate the event as a result of some specified thing occurring).

The flow logic for a respective workflow can be written abstractly using a canonical data model (CDM) 117. Although the canonical data model 117 is depicted as being inside the message broker 114, it is noted that the canonical data model 117 can be separate from the message broker 114 in other examples.

The canonical data model 117 can be used to express application calls to be issued by the orchestrator 112 to the message broker 114. The canonical data model 117 can also be used to express arguments (e.g. messages) for use in the calls, as well as the logic to be performed. The application calls can be abstract calls. The canonical data model 117 can be expressed in a specific language, such as a markup language or in another form.

More generally, a flow logic written according to the canonical data model 117 can represent the following: arguments that are being exchanged in interactions of the applications, the functions that are called to support the interactions, the events (e.g. responses, results, or other events) that can result, any errors that can arise, and states of the use case executed across the applications. In general, ad-hoc data models can be used, but they may change whenever a new use case is introduced or when an application changes. According to implementations of the present disclosure, the canonical data model 117 can be defined across a large number of use cases representative of the relevant interactions that can take place in a particular domain (such as IT management or another domain) and across a wide set of applications that can be used to support subsets of the use cases. Thus, in general, a canonical data model can be shared across use cases of a particular domain. A different canonical data model can be used for use cases of another domain. If a use case involves applications in different domains, then a canonical data model can be expanded to support the other domain, or multiple canonical data models may be used.

The information representing interactions between applications and the information representing the states of the applications can be used to track a current state of a workflow (assuming a stateful workflow). The information regarding the errors in the canonical data model 117 can be used for handling errors that arise during execution of the applications. The information regarding the errors can be used to map an error of an application to an error of the workflow that is being performed by the orchestrator 112.
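The canonical representation of calls and events, and the mapping of application errors to workflow errors, can be sketched as follows. All of the class, field, and error names here are illustrative assumptions, not part of the disclosure.

```python
# Sketch of abstract calls and events expressed against a canonical data
# model, plus a mapping from application-level errors to workflow-level
# errors for use by the orchestrator's error handling.

from dataclasses import dataclass, field

@dataclass
class AbstractCall:
    """An application call expressed in canonical (abstract) terms."""
    function: str                       # canonical function name
    arguments: dict = field(default_factory=dict)

@dataclass
class AbstractEvent:
    """A response, result, or error expressed in canonical terms."""
    kind: str                           # "response" | "result" | "error"
    payload: dict = field(default_factory=dict)

def map_error(event, error_map):
    """Map an application-level error code to a workflow-level error."""
    return error_map.get(event.payload.get("code"), "workflow_unknown_error")
```

Because the flow logic sees only these canonical forms, the same flow logic can remain valid when an underlying application (and its adapter) is replaced.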

By using the canonical data model 117, the development of flow logic that is valid across large sets of applications can be achieved. Sharing a data model across the flow logic can facilitate combining the flow logic and/or customizing the flow logic, and also allows for adapters to be changed or modified to replace applications.

In other implementations, the service exchange 110 does not employ the canonical data model 117, but rather development of the flow logic can be ad-hoc (such as by use of the ad-hoc models noted above) for each use case and/or set of applications.

The application calls issued by the orchestrator 112 can be sent through an interface between the orchestrator 112 and the message broker 114. In this way, the expression of the flow logic does not have to be concerned with specific data models or interfaces employed by the applications, which simplifies the design of the orchestrator 112.

Also, the orchestrator 112 does not have to know specific locations of the applications—the applications can be distributed across multiple different systems in disparate geographic locations. The message broker 114 is responsible for routing the application calls to the respective adapters 116.

Information communicated between the message broker 114 and the adapters 116 is also in an abstract form according to the canonical data model. For example, the message broker 114 can forward an abstract application call from the orchestrator 112 to a respective adapter. Similarly, an adapter can send an event from an application to the message broker in an abstract form according to the canonical data model.

The adapters 116 perform protocol translations between the protocol of the abstract API of the message broker 114, and the protocols to which the interfaces exposed by the corresponding applications are bound. As an example, the protocol of the abstract API of the message broker 114 can be according to a Representational State Transfer (REST) protocol or some other protocol. The protocol of an interface exposed by an application can include Simple Object Access Protocol (SOAP), Remote Procedure Call (RPC), Session Initiation Protocol (SIP), and so forth.

Each adapter 116 can also transform the data model of a message (e.g. a message carrying an event) and an abstract API call to the data model and specific API call exposed by a particular application (e.g. an instance or release of the particular application). Stated differently, the adapter 116 performs interface adaptation or interface translation by converting the abstract message or abstract API call to a message or API call that conforms to the API of the target application. The reverse conversion is performed in the reverse direction, where the result, response, event, message, or API call from an application is converted to an abstract message or abstract API call that can be passed through the message broker 114 to the orchestrator 112.
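The two-way translation performed by an adapter can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the mapping table, the field names, and the target application's request shape are invented, not part of the disclosure.

```python
# Hypothetical adapter sketch: translate an abstract call (in canonical
# form) into an application-specific request, and translate the
# application's response back into an abstract event.
def to_app_request(abstract_call):
    # Assumed mapping from a canonical function name to the target
    # application's concrete API (here, an imagined REST endpoint).
    mapping = {"incident.create": ("POST", "/tickets")}
    method, path = mapping[abstract_call["function"]]
    # Assumed field rename: canonical "summary" -> app's "short_description".
    body = {"short_description": abstract_call["arguments"]["summary"]}
    return {"method": method, "path": path, "body": body}

def to_abstract_event(app_response):
    # Reverse direction: wrap the application's response in canonical form.
    return {"kind": "response", "payload": {"id": app_response["sys_id"]}}
```

Because only the adapter knows the concrete API, the flow logic on the orchestrator side is unchanged when the application behind the adapter changes.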

Each adapter 116 can also perform address translation between an address in the address space used by the orchestrator 112 and the message broker 114, and an address in the address space of an application.

The service exchange 110 provides for a multi-point orchestrated integration across multiple applications. The multi-point orchestrated integration can include the applications 118 associated with the data center 100 as well as applications or services of the other data centers 102 and 104.

In the example according to FIG. 1A, the workflow 113 executed by the orchestrator 112 of the service exchange 110 can thus involve both processes of applications 118 associated with the data center 100, as well as processes of applications of the data center 102 and services 140 (e.g. SaaS services) in the data center 104.

The data center 102 also includes a service exchange 130 that can have a similar arrangement as the service exchange 110 of the data center 100.

The service exchange 130 can include or be associated with applications 138. Execution of the applications 138 can provide the services 120 of the data center 102 shown in FIG. 1A.

The service exchange 130 includes an orchestrator 132 that can execute a workflow 133, a message broker 134 that includes a message confirmation engine (ME) 139 and a canonical data model 137 (similar to the message confirmation engine 119 and the canonical data model 117 in the message broker 114), and adapters 136. The message broker 134 also has an interface 135 similar to the interface 115 of the message broker 114.

The workflow 133 executed by the orchestrator 132 in the service exchange 130 of the data center 102 can also involve applications and services across multiple data centers.

In some examples, the data center 104 does not include a service exchange similar to service exchange 110 or 130, but instead includes a different infrastructure for deploying the services 140. In general, a service exchange may not exist in a data center (including its services and/or applications to be orchestrated) that is provided by a provider different from the enterprise that provides the data centers 100 and 102, for example. In such situations, techniques as discussed above for a remote data center without a gateway can be applied.

In examples according to FIG. 1A, an orchestrator 112 or 132 can orchestrate execution of a workflow that includes selected applications or services, including the applications 118, the applications 138, and the SaaS services 140.

As further shown in FIG. 1A, the data center 100 and data center 102 each includes a respective gateway 142 and 144. Although the gateways 142 and 144 are shown outside the respective service exchanges 110 and 130, it is noted that the gateways 142 and 144 can also be considered to be part of the respective service exchanges 110 and 130. Each gateway 142 or 144 provides a bridge between communications of the respective service exchange 110 or 130 (more specifically the communications of the respective message broker 114 or 134) and communications over a network 146, which can be a public network such as the Internet.

Communications over the network 146 can be according to a specified protocol, such as the Hypertext Transfer Protocol (HTTP), WebSocket protocol, Representational State Transfer (REST) protocol, or any other protocol.

As further shown in FIG. 1B, each gateway 142 or 144 includes a protocol translator 150 that can convert between the protocol used by the respective message broker 114 or 134, and the protocol used over the network 146.

For orchestration of applications within a single data center (such as 100 or 102), information exchange can be accomplished using the respective message broker 114 or 134, without involving the gateway 142 or 144. Moreover, for communications within just one data center, packet loss and delay and/or security of messages may not be a concern, since the communications occur within the same data center.

However, for communications among different data centers (e.g. across clouds) or over the Internet, message loss and delay and/or message security can become a concern. The gateways 142 and 144 are provided to address the foregoing issues and possibly issues associated with confirmation of message delivery and message processing commit, as discussed above.

In some examples, the message confirmation engine 119 or 139 of the message broker 114 or 134 can be used to ensure that a message is delivered to a target by checking for confirmation of receipt of the message by the target, and to ensure that the target has returned a confirmation of commit. In other examples, the gateway 142 or 144 can include a message confirmation engine 154 to perform the foregoing tasks, and can perform resending of a message in response to not receiving a confirmation of receipt of the message, and resending the message or indicating an error in response to not receiving a confirmation of commit.
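The confirm-and-resend behavior described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the disclosure's implementation: the transport interface (`send`, `wait_ack`), the retry count, and the timeout are all invented for illustration.

```python
# Hypothetical sketch of a message confirmation engine's behavior:
# resend when no confirmation of receipt arrives, and surface an error
# when no confirmation of commit arrives.
def send_with_confirmation(transport, message, retries=3, timeout=5.0):
    for _ in range(retries):
        transport.send(message)
        if transport.wait_ack(message["id"], "receipt", timeout):
            break
    else:
        # All resend attempts exhausted without a receipt confirmation.
        raise RuntimeError("no confirmation of receipt")
    if not transport.wait_ack(message["id"], "commit", timeout):
        # Per the text, the engine can resend the message or indicate an
        # error; this sketch indicates an error.
        raise RuntimeError("no confirmation of commit")
```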

As further discussed above, the message confirmation engine 119 or 139 in the message broker 114 or 134 can also ensure that messages are delivered in a managed manner; in other words, messages are delivered without loss and with acceptable delays (delays within specified target levels). The message broker 114 or 134 can also perform remediation if message loss or delays occur. Delivery times for messages can be monitored, and messages that are lost or excessively delayed (delayed longer than a specified target goal) are re-sent. Also, management systems can be notified as appropriate.

The message confirmation engine 154 in the gateway 142 or 144 can perform managed delivery of messages in lieu of or in addition to that performed by the message confirmation engine 119 or 139 of the message broker 114 or 134.

As noted above, the message confirmation engine 119 or 139 can also perform secure communication of messages with endpoints, such as by employing encryption of messages, mutual authentication of messages, or use of certificates.

In addition to or in lieu of performing security for messages by the message broker 114 or 134 across the network 146, a security engine 152 of the gateway 142 or 144 can perform the respective tasks, which can include message encryption, mutual authentication, or security using a certificate.

The gateway 142 or 144 can implement protocol changes and capabilities to perform the foregoing. For example, the gateway 142 or 144 can implement a mechanism to number and timestamp messages or packets (carrying the messages) that are sent to target endpoints over the network 146. For example, sequence numbers that monotonically increase can be assigned to messages (or packets) as they are sent to the target endpoint. The sequence numbers can be used to identify which data units (message or packets) were not received (i.e. lost). The gateway 142 or 144 can also determine the time for delivery and receipt of messages sent to the target endpoints. If the time taken to deliver a data unit (message or packet) exceeds a target goal, then the data unit can be re-sent. In some examples, bi-directional Hypertext Transfer Protocol (HTTP) communications (such as according to the WebSocket protocol) can be established between gateways 142 and 144 with packet numbering and timestamps (that indicate when a packet was sent). The bi-directional HTTP communications include a return channel through which a receiver is able to provide feedback regarding delays or data loss. The WebSocket protocol supports adding extensions such as packet numbers and timestamps in a standardized manner.
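The sender-side numbering/timestamping and receiver-side loss detection described above can be sketched as follows. The WebSocket extension framing itself is omitted, and the function names and data shapes are assumptions for illustration; only the bookkeeping is shown.

```python
import itertools
import time

# Monotonically increasing sequence numbers assigned to outgoing
# messages, as described in the text. Names here are illustrative.
_seq = itertools.count(1)

def stamp(payload):
    """Attach a sequence number and a send timestamp to a payload."""
    return {"seq": next(_seq), "ts": time.time(), "payload": payload}

def detect_gaps(received_seqs):
    """Return sequence numbers that were never received (i.e. lost),
    so the corresponding data units can be re-sent."""
    expected = set(range(1, max(received_seqs) + 1))
    return sorted(expected - set(received_seqs))
```

The receiver can also compare each message's timestamp against its arrival time to detect data units delayed beyond the target goal.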

For the data center 104, managed communication of messages can be accomplished in the following manner. In some examples, the SaaS services 140 can expose APIs and protocols that are compatible with the use of packet numbering and timestamping by the gateways 142 and 144. For example, the SaaS services 140 can employ a WebSocket protocol that employs packet numbering and timestamps, or some other mechanism. In this way, the gateways 142, 144 can interact with the SaaS services 140 to perform managed delivery of messages.

In other examples, managed delivery of messages with the SaaS services 140 is accomplished using just the gateway 142 or 144 on one side. If the remote data center 104 does not support deployment of a gateway and service exchange (e.g. because the data center belongs to another entity), then the gateway 142 on the enterprise service exchange side can only do as much as the available protocols allow. In alternative examples, the remote data center 104 can support the same protocols or mechanisms as the gateway 142; for example, the same behavior could be implemented at the SaaS level if the SaaS APIs were bound to WebSocket with the same extensions. In other examples, a gateway and service exchange of another data center that is geographically close to or collocated with the data center 104 can be used to interact with the data center 104.

To implement security, the gateway 142 or 144 can perform security-related tasks for messages in addition to or in lieu of the security-related tasks performed by the message broker 114 or 134 discussed above. As discussed above, these security-related tasks include encryption of messages, mutual authentication of messages, or use of certificates. For communications with the SaaS services 140 where just one gateway 142 or 144 is present, then encryption and authentication as supported by SaaS services 140 can be employed.

In other examples, a respective gateway can also be included in the data center 104.

The latency associated with communications over the network 146 can cause delays in the progress of a use case and can degrade the user experience and the quality of service (QoS). Also, faults or errors in the network 146 may cause certain information to be lost, so that reliable communications may not be readily available over the network 146. A workflow provided by the service exchange 110 or 130 may be associated with a target performance goal, or more simply, a target goal. Examples of a target goal include any of the following: a target maximum response time for a request, a target maximum usage of resources, a target maximum error rate, and so forth. The target goal can be specified in a service level agreement (SLA) and can specify a maximum allowable delay between a call from the orchestrator and a response from a target application or service. In other examples, the target goal can be associated with a QoS level that is specified for the workflow, either by agreement or some other mechanism.

FIG. 2 shows an example of the service exchange 110 with a management engine 202 according to some implementations. The management engine 202 can be implemented with a combination of machine-executable instructions and processing hardware, or with just processing hardware. The management engine 202 can be separate from the message broker 114, or can be part of the message broker 114.

The service exchange 110 includes the orchestrator 112 and message broker 114 as discussed above. Also, the service exchange 110 includes adapters 116-1 to 116-N (N>1) that are provided between the message broker 114 and respective applications 118-1 to 118-N.

The management engine 202 includes a performance monitor 204 that can monitor the performance of a workflow that is executed by the orchestrator 112. The management engine 202 also is able to access a database 206 that stores information relating to target goals 208 associated with respective workflows that can be executed by the orchestrator 112, where the target goals can be specified by an SLA or a QoS level. The database 206 can also store metrics and thresholds.

According to the examples described above, the performance monitor 204 can detect a time when a request is received to initiate a workflow. In response to the request, the performance monitor 204 can measure the amount of time that has elapsed since the request was received. The performance monitor 204 can retrieve information relating to a target goal associated with the workflow, and can compare the elapsed time with the target goal to determine whether execution of the workflow will satisfy a target maximum time duration specified by the target goal. If not, the performance monitor 204 can issue an indication to a handler 210 of the management engine 202 to handle the potential violation of the target goal and perform remediation.

In some examples, in a workflow that includes multiple tasks, the target goal can specify the maximum time duration from when task i (of an application of any of multiple data centers) begins to when task i is expected to complete. The performance monitor 204 is able to compare the elapsed execution time for task i with the maximum time duration of the target goal. If violation of the maximum time duration has or is predicted to occur, then the performance monitor 204 issues an indication to the handler 210, which can take action to resolve the issue. For example, the handler 210 can cause additional computing resources to be allocated to the workflow, so that the workflow can execute at a faster rate to meet the target goal.
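The elapsed-time check against a target goal can be sketched as follows. The function and parameter names are assumptions for illustration; the handler callback stands in for the handler 210 described above (e.g. allocating additional resources).

```python
import time

# Illustrative sketch of the performance monitor's check: compare the
# elapsed execution time against a target maximum duration and notify a
# handler on a (potential) violation.
def check_elapsed(start_time, target_max_seconds, on_violation):
    elapsed = time.monotonic() - start_time
    if elapsed > target_max_seconds:
        on_violation(elapsed)  # e.g. the handler allocates more resources
        return False
    return True
```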

The performance monitor 204 can also monitor for other events associated with the workflow. For example, the performance monitor 204 can determine the error rate associated with execution of the workflow, or can determine the amount of resources used by the execution of the workflow. If the error rate or resource usage exceeds a specified threshold (e.g. a threshold error rate or a threshold resource consumption), then the performance monitor 204 can issue a respective indication to cause the handler 210 to take a corresponding action, such as to allocate a different set of resources to execute the workflow (if a currently allocated set of resources is causing an excessive error rate), or to reduce an allocation of resources (if the workflow is consuming an excessive amount of resources).

FIG. 3 shows an example in which the service exchange 110 according to some implementations of the present disclosure is able to interact with a legacy integration framework 302, such as an ESB framework (as discussed above), a Schools Interoperability Framework (SIF), or any other integration framework that is a different type of integration framework from the service exchange 110. SIF includes a specification for modeling data, and a Service-Oriented Architecture (SOA) specification for sharing data between institutions.

The legacy integration framework 302 integrates applications 304, such as according to enterprise application integration (EAI). A gateway 306 is provided between the legacy integration framework 302 and the service exchange 110 to perform protocol translation 308 and interface translation 310 between the interface 115 of the service exchange 110 and the interface 312 of the legacy integration framework 302.

The protocol translation 308 and interface translation 310 can be similar to the protocol and interface translations applied by the adapters 116 of the service exchange 110, except that the protocol translation 308 and interface translation 310 are to provide adaptation to the legacy integration framework 302 and to the applications 304 integrated by the legacy integration framework 302.

Although not shown, a bridge can also be provided between the service exchange 110 and the legacy integration framework 302 if the service exchange 110 and the legacy integration framework 302 are in different data centers. The bridge can include gateways similar to the gateways 142 and 144 at each end discussed above.

FIG. 3 also shows a portal 314. Note that a portal can also be provided in implementations according to FIGS. 1 and 2. The portal 314 is an example of an entity that interacts with the API 105 for triggering workflows or interacting with orchestrated applications. Although FIG. 3 shows the portal 314 as using the message broker 114, it is noted that the portal 314 can also be one of the applications orchestrated through a respective adapter 116.

In some examples, the portal 314 can present a user interface (UI). The portal 314 can include machine-executable instructions or a combination of machine-executable instructions and processing hardware. The portal 314 can be at a computer (e.g. client computer) that can be remote from the service exchange 110. The UI allows a user to interact with the service exchange 110.

A user can perform an action in the UI that triggers the execution of flow logic (selected from among multiple different flow logics) by the orchestrator 112 to perform a workflow.

An indication of the user action in the UI (e.g. an action to order an item or service) can be communicated by the portal 314 through the message broker 114 to the orchestrator 112 and the corresponding workflow. The indication can be communicated using the API 105 (e.g. a REST API) of the message broker 114.

This indication of user action received by the message broker 114 can be communicated to the orchestrator 112, which invokes execution of the corresponding flow logic to perform the requested workflow.
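How a portal action might be conveyed to the broker's REST API can be sketched as follows. The endpoint path and payload shape are purely assumptions for illustration, not a description of the API 105 of the disclosure.

```python
import json

# Hypothetical sketch of building the request a portal could send to the
# message broker's REST API to trigger a workflow.
def build_trigger_request(workflow_name, user_action):
    return {
        "method": "POST",
        "path": "/broker/workflows",  # assumed, illustrative endpoint
        "body": json.dumps({"workflow": workflow_name, "action": user_action}),
    }
```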

FIG. 4 is a flow diagram of a process performed by a service exchange (e.g. 110 or 130).

An orchestrator (e.g. 112 or 132) of the service exchange in a first data center executes (at 402) a workflow that is associated with a target performance goal, where the workflow includes tasks of applications of the first data center and of a second data center.

During the executing of the workflow, information is communicated (at 404) between the orchestrator and the processes of the applications through a message broker (e.g. 114 or 134) of the service exchange.

Adapters (e.g. 116 or 136) of the service exchange perform (at 406) protocol and interface translations for information communicated between the message broker and the applications in the first data center.

A management engine (e.g. 202) monitors (at 408) performance of the workflow to determine whether the executing of the workflow is able to meet the target goal.

Content of the service exchange platform including the orchestrator, the message broker, and the adapters can be changed, such as from an administration system coupled to the service exchange. Applications can be changed, flow logic can be changed, and use cases can be created.

Any given application can be updated or replaced simply by replacing or modifying the corresponding adapter. For example, if an enterprise wishes to upgrade or replace a given application (with a new application or an updated version of the given application), then the corresponding adapter to which the given application is coupled can be replaced or updated to support the updated or replacement application. In some cases, replacing the given application can involve replacing a first application supplied by a first vendor with a second application supplied by a different vendor. In other cases, replacing the given application can involve replacing a first application supplied by a vendor with another application supplied by the same vendor. As yet another example, replacing the given application can include upgrading the application to a new release.

Changing a given adapter can involve removing a representation of the adapter (which can be in the form of program code, a markup language file, or some other representation), and replacing the removed representation of the adapter with a new representation of a different adapter. Changing the given adapter can alternatively involve modifying the given adapter or modifying a configuration of the given adapter to support the different application. The changing of the given adapter can be performed by a machine or by program code, either autonomously (such as in response to detection of a replacement of an application) or in response to user input.
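Why replacing an application reduces to changing an adapter can be illustrated with a minimal registry sketch. The registry, dispatch function, and adapter implementations are all hypothetical names invented for illustration.

```python
# Hypothetical sketch: the broker dispatches by abstract function name,
# so replacing the application behind a function only changes the
# registry entry; the flow logic issuing the call is untouched.
adapters = {}

def register(function, adapter):
    adapters[function] = adapter

def dispatch(abstract_call):
    return adapters[abstract_call["function"]](abstract_call)

register("incident.create", lambda call: "ticket from vendor A")
# Replacing the application: swap in a new adapter under the same
# abstract function name.
register("incident.create", lambda call: "ticket from vendor B")
```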

Changing an application may also involve moving the application from one instance to another instance, or from one location to another location. The respective adapter can be updated, or the configuration of the adapter can be changed (while the adapter itself remains unchanged), to refer to another application instance or to an instance of the application at another location.

When changing an application to a new or updated application, it may be possible that certain functionality of the previous application is no longer available from the new or updated application. In this case, the respective adapter can delegate or orchestrate with another application (or web service) that provides the missing functionality. Alternatively, the workflow can be modified to take into account the loss of functionality in the use case.

Also, if new functionality is provided by a new or upgraded application, the workflow can be modified to use the new functionality.

In accordance with some implementations, a workflow can be modified relatively easily by replacing the respective flow logic with different flow logic (a modified version of the flow logic or new flow logic). The different flow logic can then be loaded onto the orchestrator to implement the modified workflow. By using the service exchange, workflows can be easily customized by providing new or modified flow logic to the orchestrator. Nothing else has to be changed unless a new use case specifies use of new calls and data not covered by the current adapters (e.g. an adapter is able to call just a subset of the APIs of the application) or the canonical data model. In the latter case, the canonical data model can be updated and adapters can be updated to be able to make the calls, or new adapters can be provided.

New use cases can also be created, and corresponding flow logic and adapters can be provided. In addition, the canonical data model may be updated accordingly.

The content changes noted above can be performed using any of various tools, such as a Software Development Kit (SDK) tool or another type of tool used to create applications and other program code. A content pack can be updated using the tool, and the modified content pack can be loaded using an administration system. The administration system can configure the adapters to point to the correct instance of an application. A new use case and respective content can also be created with an SDK tool. Note also that when the canonical data model 117 is updated, the canonical data model 117 remains backwards compatible with content packs of existing use cases.

FIG. 5 is a block diagram of an example computer system 500 according to some implementations, which can be used to implement the service exchange 110 or 130 according to some implementations. The computer system 500 can include one computer or multiple computers coupled over a network. The computer system 500 includes a processor (or multiple processors) 502. A processor can include a microprocessor, a microcontroller, a physical processor module or subsystem, a programmable integrated circuit, a programmable gate array, or another physical control or computing device.

The processor(s) 502 can be coupled to a non-transitory machine-readable or computer-readable storage medium 504, which can store various machine-executable instructions. The machine-executable instructions can include orchestration instructions 506 to implement the orchestrator 112 or 132, message broker instructions 508 to implement the message broker 114 or 134 (including the message broker application 410 and event handlers 414 shown in FIG. 3), adapter instructions 510 to implement the adapters 116, management engine instructions 512 to implement the management engine 202 (including the performance monitor 204 and the handler 210), and message confirmation instructions 514 to implement the message confirmation engine 119 or 139.

The storage medium (or storage media) 504 can include one or multiple forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices. Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.

In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.

Claims

1. A system comprising:

a service exchange comprising: an orchestrator to execute a workflow that involves a plurality of applications and services of a plurality of data centers; a message broker to exchange messages that comprise a call from the orchestrator to at least one of the applications and the services, and the orchestrator to react to an event or call from the at least one application or service; and adapters to perform protocol and interface translations for information communicated between at least some of the applications and the message broker.

2. The system of claim 1, wherein the message broker is to check for confirmation of receipt of a given message from a target to which the given message is sent, and to cause resending of the given message in response to failure to receive the confirmation of receipt.

3. The system of claim 2, wherein the message broker is to receive the confirmation of receipt of the given message from one of the adapters.

4. The system of claim 1, wherein the message broker is to check for confirmation of commit of processing of a given message from a target to which the given message is sent, and to cause resending of the given message or indicating an error in response to failure to receive the confirmation of commit.

5. The system of claim 4, wherein the message broker is to further perform rollback or notification of at least one management system in response to failure to receive the confirmation of commit.

6. The system of claim 1, wherein the message broker is to check for loss or delay of a given message, and to perform remediation in response to detecting the loss or the delay of the given message.

7. The system of claim 1, wherein the message broker is to apply security to a message communicated to a target.

8. The system of claim 1, wherein the service exchange is part of a first data center of the plurality of data centers, and the service exchange further comprising a gateway to communicate over a network with a second data center of the plurality of data centers, the gateway to convert between a protocol used by the message broker and a protocol used over the network.

9. The system of claim 8, wherein the gateway is to check for loss or delay of a given message sent over the network, and to perform remediation in response to detecting the loss or the delay of the given message.

10. The system of claim 9, wherein the gateway is to add numbers and timestamps to messages or packets according to extensions supported by a WebSocket protocol.

11. The system of claim 8, wherein the gateway is to apply security to a message communicated to a target over the network.

12. The system of claim 1, wherein the message broker is to interact through a gateway with an integration framework.

13. The system of claim 12, wherein the integration framework is selected from among an Enterprise Service Bus (ESB) framework and a Schools Interoperability Framework (SIF).

14. The system of claim 1, wherein the service exchange is part of a first data center of the plurality of data centers, and wherein services of a second data center of the plurality of data centers comprise software as a service (SaaS) services.

15. The system of claim 1, further comprising a management engine to manage a target goal associated with communications with the applications and the services across the plurality of data centers.

16. The system of claim 15, wherein the target goal specifies a target time duration goal for communication with an application or service.

17. The system of claim 15, wherein the target goal is specified by a service level agreement or a quality of service.

18. A method comprising:

executing, by an orchestrator of a service exchange in a first data center, a workflow that is associated with a target performance goal, the workflow comprising tasks of applications of the first data center and of a second data center;
communicating, during the executing of the workflow, information between the orchestrator and the processes of the applications through a message broker of the service exchange;
performing, by adapters of the service exchange, protocol and interface translations for information communicated between the message broker and the processes of the applications in the first data center; and
monitoring, by a management engine, performance of the workflow to determine whether the executing of the workflow is able to meet the target performance goal.

19. The method of claim 18, wherein monitoring the performance of the workflow comprises determining whether the executing of the workflow is able to satisfy a time duration goal of the workflow.

20. The method of claim 18, wherein monitoring the performance of the workflow comprises determining whether the executing of the workflow is able to satisfy an error rate goal or a resource consumption goal of the workflow.

21. An article comprising at least one non-transitory machine-readable storage medium storing instructions that upon execution cause a system to:

execute, by an orchestrator of a service exchange in a first data center, a workflow, the workflow comprising tasks of applications across a plurality of data centers, at least one of the data centers comprising a cloud of resources;
communicate, during the executing of the workflow, information between the orchestrator and the processes of the applications through a message broker of the service exchange;
check, by the message broker, for confirmation of receipt and for confirmation of commit of processing of a given message sent to a target;
perform a remediation action in response to failing to receive the confirmation of receipt or the confirmation of commit; and
perform, by adapters of the service exchange, protocol and interface translations for information communicated between the message broker and at least some of the applications.
Patent History
Publication number: 20150163179
Type: Application
Filed: Dec 8, 2014
Publication Date: Jun 11, 2015
Inventors: Stephane Herman Maes (Fremont, CA), Woong Joseph Kim (Milford, CT), Ankit Ashok Desai (Santa Clara, CA), Christopher William Johnson (Evergreen, CO)
Application Number: 14/563,331
Classifications
International Classification: H04L 12/58 (20060101);