Abstract: Methods and systems for improved integration functions for applications are provided. In one embodiment, a method is provided that includes receiving a request to execute an application. The request may specify a primary container image for the application and a secondary container image for an integration function used by the application. A primary container may be created for execution of the primary container image and a secondary container may be created for execution of the secondary container image. The primary and secondary containers may be executed to implement the application.
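A minimal sketch of one way this flow could look in code, assuming a hypothetical container runtime interface; the names ExecutionRequest, ContainerRuntime, create_container, and start_container are illustrative placeholders and not part of any specific runtime API:

```python
# Sketch of the described flow: one request names a primary image (the
# application) and a secondary image (the integration function); a container
# is created and started for each. All names here are illustrative.
from dataclasses import dataclass


@dataclass
class ExecutionRequest:
    primary_image: str    # container image for the application itself
    secondary_image: str  # container image for the integration function


class ContainerRuntime:
    """Stand-in for a real container runtime client."""

    def create_container(self, image: str) -> str:
        # A real runtime would pull the image and create a container,
        # returning its identifier.
        return f"container-for-{image}"

    def start_container(self, container_id: str) -> None:
        print(f"starting {container_id}")


def execute_application(request: ExecutionRequest, runtime: ContainerRuntime) -> None:
    # Create a primary container for the application image and a secondary
    # container for the integration-function image.
    primary = runtime.create_container(request.primary_image)
    secondary = runtime.create_container(request.secondary_image)

    # Execute both containers together to implement the application.
    runtime.start_container(secondary)
    runtime.start_container(primary)


execute_application(
    ExecutionRequest(primary_image="app:1.0", secondary_image="integration-fn:1.0"),
    ContainerRuntime(),
)
```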
Abstract: The present disclosure provides for systems and methods for dynamically managing a concurrency limit of a serverless function, that is, the quantity of instances of the serverless function that may be executed concurrently. Performance metrics of the serverless function, as it is implemented by services, may be measured and compared against preconfigured thresholds. If the performance metrics meet the preconfigured thresholds, the concurrency limit of the serverless function may be increased. In some aspects, if one or more performance metrics fail to meet a respective preconfigured threshold, the concurrency limit of the serverless function may be decreased.
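A hedged sketch of the adjustment rule described above; the metric names, thresholds, and the adjust_concurrency_limit helper are assumptions for illustration, not the disclosed implementation:

```python
# Raise the concurrency limit when every measured metric meets its
# preconfigured threshold; lower it (never below one) when any metric fails.
def adjust_concurrency_limit(current_limit: int,
                             metrics: dict,
                             thresholds: dict,
                             step: int = 1) -> int:
    if all(metrics[name] <= limit for name, limit in thresholds.items()):
        return current_limit + step
    return max(1, current_limit - step)


# Example: latency and error rate both within thresholds, so the limit grows.
limit = 10
metrics = {"p95_latency_ms": 120.0, "error_rate": 0.01}
thresholds = {"p95_latency_ms": 200.0, "error_rate": 0.05}
limit = adjust_concurrency_limit(limit, metrics, thresholds)
print(limit)  # 11
```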
Abstract: Systems and methods for auditing batch jobs with blockchain transactions are provided. In one embodiment, a method is provided that includes running a batch job on a client machine to download one or more files from a server machine to the client machine and determining a batch job result of the batch job. The method may further include generating a batch result transaction at the client machine. The batch result transaction may include the batch job result. In certain embodiments, the method may proceed with adding the batch result transaction to a blockchain.
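A toy sketch of the audit flow; the download step, hashing scheme, and the Blockchain class below are simplified stand-ins for whatever ledger a real deployment would use:

```python
# Run a batch job on the client, build a batch result transaction from its
# outcome, and append that transaction to a hash-chained ledger for auditing.
import hashlib
import json
import time


class Blockchain:
    def __init__(self):
        self.chain = []

    def add_transaction(self, payload: dict) -> None:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
        self.chain.append({"payload": payload,
                           "prev": prev_hash,
                           "hash": hashlib.sha256(body.encode()).hexdigest()})


def run_batch_job(files: list[str]) -> dict:
    # Stand-in for downloading files from the server machine; here we only
    # record which files were requested and whether the job succeeded.
    return {"files": files, "status": "success", "completed_at": time.time()}


ledger = Blockchain()
result = run_batch_job(["report.csv", "ledger.dat"])
ledger.add_transaction({"type": "batch_result", "result": result})
print(ledger.chain[-1]["hash"])
```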
Abstract: The present disclosure provides for a system that dynamically adjusts how the system distributes messages to a set of consumers. The system measures a quantity of consumers in communication with the system. The system also measures performance metrics of each respective consumer of the set of consumers. In response to a change in the quantity of consumers, or to one or more performance metrics of an individual consumer meeting, or failing to meet, a respective predetermined threshold, the system may adjust a cache size the system allocates to the individual consumer and accordingly may adjust how the system distributes messages to the individual consumer. For instance, the system may distribute more or fewer messages to the individual consumer. The individual consumer may also communicate to the system a maximum cache limit on the messages it is able to receive.
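A hypothetical sketch of such a distributor; the Consumer fields, the latency metric, and the step-based adjustment rule are illustrative assumptions about one way this behavior could be realized:

```python
# Grow a consumer's cache size while it keeps up, shrink it when it falls
# behind, and never exceed the maximum cache limit the consumer reports.
from dataclasses import dataclass


@dataclass
class Consumer:
    name: str
    cache_size: int        # messages the system currently sends this consumer
    max_cache_limit: int   # limit the consumer itself communicates
    latency_ms: float      # measured performance metric


def adjust_cache_size(consumer: Consumer, latency_threshold_ms: float,
                      step: int = 10) -> None:
    if consumer.latency_ms <= latency_threshold_ms:
        consumer.cache_size = min(consumer.cache_size + step,
                                  consumer.max_cache_limit)
    else:
        consumer.cache_size = max(1, consumer.cache_size - step)


def distribute(messages: list[str], consumers: list[Consumer]) -> dict[str, list[str]]:
    # Hand each consumer up to cache_size messages from the shared backlog.
    assignments, backlog = {}, list(messages)
    for consumer in consumers:
        assignments[consumer.name] = backlog[:consumer.cache_size]
        backlog = backlog[consumer.cache_size:]
    return assignments


fast = Consumer("fast", cache_size=20, max_cache_limit=100, latency_ms=5.0)
slow = Consumer("slow", cache_size=20, max_cache_limit=40, latency_ms=80.0)
for c in (fast, slow):
    adjust_cache_size(c, latency_threshold_ms=50.0)
print(fast.cache_size, slow.cache_size)  # 30 10
```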
Abstract: The present disclosure provides for a system with an adaptive thread pool for processing messages. The system includes a processor and a memory storing instructions. The processor allocates a first quantity of threads in a thread pool to process a set of messages in parallel. The processor then measures one or more performance metrics of the system while processing the messages with the first quantity of threads. The processor then determines whether each of the one or more performance metrics meets a respective predetermined threshold. The processor then increases the allocation of the first quantity of threads to a second quantity of threads in the thread pool if each of the one or more performance metrics meets the respective predetermined threshold. The processor may also decrease the quantity of threads if at least one performance metric does not meet its predetermined threshold.
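A sketch under assumptions of the adaptive thread pool described above: the throughput metric, its threshold, and the use of ThreadPoolExecutor are illustrative choices, not details prescribed by the abstract:

```python
# Process a batch of messages in parallel with the current thread allocation,
# measure a performance metric, then grow or shrink the allocation depending
# on whether every metric meets its predetermined threshold.
from concurrent.futures import ThreadPoolExecutor
import time


def process(message: str) -> str:
    time.sleep(0.01)  # stand-in for real message-processing work
    return message.upper()


def process_batch(messages: list[str], thread_count: int) -> dict:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=thread_count) as pool:
        list(pool.map(process, messages))
    elapsed = time.perf_counter() - start
    return {"throughput": len(messages) / elapsed}  # messages per second


def next_thread_count(current: int, metrics: dict, thresholds: dict,
                      step: int = 2) -> int:
    # Increase the allocation when every metric meets its threshold,
    # otherwise decrease it (never below one thread).
    if all(metrics[name] >= floor for name, floor in thresholds.items()):
        return current + step
    return max(1, current - step)


threads = 4
metrics = process_batch([f"msg-{i}" for i in range(100)], threads)
threads = next_thread_count(threads, metrics, {"throughput": 200.0})
print(threads)
```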