Abstract: A system and method for the estimation of the cardinality of large sets of transaction trace data is disclosed. The estimation is based on HyperLogLog data sketches, which are capable of storing cardinality-relevant data of large sets with low and fixed memory requirements. The disclosure contains improvements to the known analysis methods for HyperLogLog data sketches that provide improved relative error behavior by eliminating a cardinality-range-dependent bias of the relative error. A new analysis method for HyperLogLog data structures is shown that applies maximum likelihood analysis to a Poisson-based approximate probability model. In addition, a variant of the new analysis model is disclosed that uses multiple HyperLogLog data structures to provide estimation results for set operations, like intersections or relative complements, directly from the HyperLogLog input data.
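For illustration, the following is a minimal Python sketch of the classic HyperLogLog register update and raw harmonic-mean estimate that such sketches build on; the register count (2^12), SHA-1 hashing, and the raw estimator are illustrative assumptions, not the bias-corrected or maximum-likelihood analysis described in the disclosure.

```python
import hashlib


class HyperLogLogSketch:
    """Minimal HyperLogLog sketch: fixed-size register array plus raw estimator."""

    def __init__(self, p=12):
        self.p = p                      # number of index bits
        self.m = 1 << p                 # number of registers (fixed memory)
        self.registers = [0] * self.m

    def _hash(self, item):
        # 64-bit hash value (SHA-1 truncated; an illustrative choice).
        return int(hashlib.sha1(str(item).encode()).hexdigest(), 16) & ((1 << 64) - 1)

    def add(self, item):
        h = self._hash(item)
        idx = h >> (64 - self.p)                  # first p bits select the register
        rest = h & ((1 << (64 - self.p)) - 1)     # remaining bits determine the rank
        rank = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def raw_estimate(self):
        # Harmonic mean of 2^register values, scaled by the standard bias constant.
        alpha_m = 0.7213 / (1 + 1.079 / self.m)
        z = sum(2.0 ** -r for r in self.registers)
        return alpha_m * self.m * self.m / z


sketch = HyperLogLogSketch()
for i in range(100000):
    sketch.add(f"transaction-{i}")
print(round(sketch.raw_estimate()))   # roughly 100000, within a few percent
```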
Abstract: A system and method is disclosed for the combined analysis of transaction execution monitoring data and a topology model created from infrastructure monitoring data of computing systems involved in the execution of the monitored transactions. Monitored communication activities of transactions are analyzed to identify intermediate processing nodes between the sender and receiver sides and to enrich transaction monitoring data with data describing those intermediate processing nodes. The topology model may also be improved by the combined analysis, as functionality and services provided by elements of the topology model may be derived from the involvement of those elements in the execution of monitored transactions. The result of the combined analysis is used by an automated anomaly detection and causality estimation system. The combined analysis may also reveal entities of a monitored environment that are used by transaction executions but are not monitored.
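A minimal sketch of the enrichment step described above: a monitored sender/receiver call is matched against paths taken from the topology model, and any intermediate nodes found on the matching path are added to the transaction data. The path table and record fields are illustrative assumptions.

```python
def enrich_call_with_intermediates(call, topology_paths):
    """Add topology-derived intermediate nodes (e.g. proxies, queues) to a
    monitored sender/receiver communication. Illustrative data shapes only."""
    for path in topology_paths:
        if path[0] == call["sender"] and path[-1] == call["receiver"]:
            enriched = dict(call)
            enriched["intermediate_nodes"] = list(path[1:-1])
            return enriched
    return call


topology_paths = [("frontend-1", "haproxy-3", "order-service-2")]
call = {"sender": "frontend-1", "receiver": "order-service-2", "duration_ms": 48}
print(enrich_call_with_intermediates(call, topology_paths))
```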
Type:
Application
Filed:
July 22, 2022
Publication date:
January 12, 2023
Applicant:
Dynatrace LLC
Inventors:
Herwig MOSER, Michael KOPP, Ernst AMBICHL
Abstract: A system and method for the analysis of log data is presented. The system uses SuperMinHash-based locality-sensitive hash signatures to describe the similarity between log lines. Signatures are created for incoming log lines and stored in signature indexes. Later similarity queries use those indexes to improve query performance. The SuperMinHash algorithm uses a two-staged approach to determine signature values; one stage uses a first random number to calculate the index of the signature value that is to be updated. The two-staged approach improves the accuracy of the produced similarity estimation data for small signatures. The two-staged approach may further be used to produce random numbers that are related, e.g. each created random number may be larger than its predecessors. This relation is used to optimize the algorithm by detecting when further created random numbers can have no influence on the created signature and terminating early.
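A simplified Python sketch of the two-staged idea, assuming tokenized log lines, an element-seeded PRNG, and 64 signature slots (all illustrative assumptions): the first random number selects the slot to update via a Fisher-Yates step, the second gives the candidate value, and candidates grow with each step so processing an element can stop once no remaining candidate can lower any slot.

```python
import hashlib
import random


def superminhash_signature(tokens, m=64):
    """Simplified SuperMinHash-style signature over a set of tokens."""
    signature = [float("inf")] * m
    for token in tokens:
        seed = int(hashlib.md5(token.encode()).hexdigest(), 16)
        rng = random.Random(seed)                 # element-seeded PRNG (assumption)
        perm = list(range(m))
        j = 0
        # Candidate values j + u grow with j, so once j reaches the current
        # maximum signature value, further candidates cannot change anything.
        while j < m and j < max(signature):
            k = rng.randint(j, m - 1)             # stage one: slot index (Fisher-Yates step)
            perm[j], perm[k] = perm[k], perm[j]
            u = rng.random()                      # stage two: fractional value in [0, 1)
            slot = perm[j]
            signature[slot] = min(signature[slot], j + u)
            j += 1
    return signature


def estimate_jaccard(sig_a, sig_b):
    """Fraction of matching slots estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


log_a = "failed to connect to database host".split()
log_b = "failed to connect to cache host".split()
print(estimate_jaccard(superminhash_signature(log_a), superminhash_signature(log_b)))
```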
Abstract: A system and method for the aggregation and grouping of previously identified, causally related abnormal operating conditions that are observed in a monitored environment is disclosed. Agents are deployed to the monitored environment which capture data describing structural aspects of the monitored environment, as well as data describing activities performed on it, like the execution of distributed transactions. The data describing structural aspects is aggregated into a topology model which describes individual components of the monitored environment, their communication activities and resource dependencies, and which also identifies and groups components that serve the same purpose, e.g. processes executing the same code. Activity-related monitoring data is continuously analyzed to identify abnormal operating conditions. Data describing abnormal operating conditions is analyzed in combination with topology data to identify networks of causally related abnormal operating conditions.
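A minimal sketch of the grouping idea: two abnormal-condition events are linked when their topology entities are connected and their time windows are close, and connected components of that link graph form candidate networks of causally related conditions. The event fields, time threshold, and undirected-edge topology are illustrative assumptions.

```python
from collections import defaultdict
from itertools import combinations


def group_causally_related(events, topology_edges, max_gap=60):
    """Group events into candidate networks of causally related conditions."""
    connected = {(a, b) for a, b in topology_edges} | {(b, a) for a, b in topology_edges}

    adjacency = defaultdict(set)
    for e1, e2 in combinations(events, 2):
        topologically_linked = (
            e1["entity"] == e2["entity"] or (e1["entity"], e2["entity"]) in connected
        )
        temporally_linked = abs(e1["start"] - e2["start"]) <= max_gap
        if topologically_linked and temporally_linked:
            adjacency[e1["id"]].add(e2["id"])
            adjacency[e2["id"]].add(e1["id"])

    # Collect connected components of the event link graph.
    groups, seen = [], set()
    for event in events:
        if event["id"] in seen:
            continue
        stack, component = [event["id"]], set()
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(adjacency[node] - component)
        seen |= component
        groups.append(component)
    return groups


events = [
    {"id": "cpu-host1", "entity": "host1", "start": 100},
    {"id": "slow-svcA", "entity": "serviceA", "start": 130},
    {"id": "disk-host9", "entity": "host9", "start": 500},
]
print(group_causally_related(events, [("serviceA", "host1")]))
```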
Abstract: A system and method is disclosed for the automated identification of causal relationships between a selected set of trigger events and observed abnormal conditions in a monitored computer system. On the detection of a trigger event, a focused, recursive search for recorded abnormalities in reported measurement data, topological changes or transaction load is started to identify operating conditions that explain the trigger event. The system also receives topology data from deployed agents, which is used to create and maintain a topological model of the monitored system. The topological model is used to restrict the search for causal explanations of the trigger event to elements that have a connection to, or interact with, the element on which the trigger event occurred. This ensures that only monitoring data of elements that are potentially involved in the causal chain of events leading to the trigger event is considered.
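A minimal sketch of the focused, recursive search: starting at the entity on which the trigger event occurred, only topologically connected entities are examined for recorded abnormalities that precede the trigger event. The data shapes, depth limit, and adjacency representation are illustrative assumptions.

```python
def find_causal_candidates(trigger_entity, trigger_time, topology, anomalies, max_depth=3):
    """Recursive, topology-restricted search for abnormalities explaining a trigger event."""
    candidates, visited = [], {trigger_entity}

    def search(entity, depth):
        if depth > max_depth:
            return
        for anomaly in anomalies.get(entity, []):
            if anomaly["time"] <= trigger_time:          # may explain the trigger event
                candidates.append((entity, anomaly, depth))
        for neighbor in topology.get(entity, []):        # restrict to connected elements
            if neighbor not in visited:
                visited.add(neighbor)
                search(neighbor, depth + 1)

    search(trigger_entity, 0)
    return sorted(candidates, key=lambda c: c[2])        # nearer explanations first


topology = {"serviceA": ["serviceB"], "serviceB": ["host7"], "host7": []}
anomalies = {
    "host7": [{"metric": "cpu", "time": 95}],
    "serviceB": [{"metric": "latency", "time": 98}],
}
print(find_causal_candidates("serviceA", trigger_time=100, topology=topology, anomalies=anomalies))
```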
Type:
Grant
Filed:
March 11, 2021
Date of Patent:
November 15, 2022
Assignee:
Dynatrace LLC
Inventors:
Ernst Ambichl, Herwig Moser, Otmar Ertl
Abstract: Technologies are disclosed for the automated, rule-based generation of models from arbitrary, semi-structured observation data. Context data of received observation data, like data describing the location on which a phenomenon was observed, is used to identify related observations, to generate entities in a model describing the observed data and to assign observations to model data. Mapping rules may be used for the on-demand generation of models, and different sets of mapping rules may be used to generate different models out of the same observation data for different purposes. Further, observation time data may be used to observe the temporal evolution of the generated model. Possible use cases of the models so generated include the interpretation of observation data that describes unexpected operating conditions in view of the generated model, or determining how a monitored system reacts to changing conditions, like increased load.
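A minimal sketch of rule-based model generation: each mapping rule names the context attributes that identify an entity, and observations matching a rule are attached to the entity they identify, so different rule sets yield different models from the same observations. The rule and observation shapes are illustrative assumptions.

```python
def build_model(observations, mapping_rules):
    """Build a model by applying mapping rules to semi-structured observations."""
    model = {}
    for obs in observations:
        for rule in mapping_rules:
            if not all(key in obs for key in rule["identity_keys"]):
                continue
            # Context attributes identify the model entity this observation belongs to.
            entity_id = (rule["entity_type"],) + tuple(obs[k] for k in rule["identity_keys"])
            entity = model.setdefault(entity_id, {"type": rule["entity_type"], "observations": []})
            entity["observations"].append(obs)
    return model


rules = [
    {"entity_type": "process", "identity_keys": ["host", "pid"]},
    {"entity_type": "host", "identity_keys": ["host"]},
]
observations = [
    {"host": "h1", "pid": 42, "metric": "cpu", "value": 0.9, "time": 1200},
    {"host": "h1", "metric": "memory", "value": 0.5, "time": 1205},
]
print(list(build_model(observations, rules).keys()))
```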
Type:
Application
Filed:
April 29, 2022
Publication date:
November 10, 2022
Applicant:
Dynatrace LLC
Inventors:
Herwig MOSER, Martin CARPELLA, Otmar ERTL
Abstract: A system and method is disclosed for identifying and evaluating the business-relevant impact of observed operating anomalies of monitored components of computing environments like data centers or cloud computing environments. The disclosed technology uses end-to-end transaction trace, availability and resource utilization data in combination with topology data received from agents deployed to the monitored computing environment. An abnormal operating condition is localized within a topological model of the monitored environment and has a defined temporal extent. On detection of an anomaly, affected transaction traces are selected that used the topology entity on which the anomaly was observed while the anomaly existed. Those transactions are then traced backwards until a topology entity is reached that represents an entry point of the monitored system.
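A minimal sketch of the selection and backward-tracing step: traces that touched the anomalous entity during the anomaly's temporal extent are picked, and each is walked back to its first node, taken here as the entry point. The trace and anomaly record shapes are illustrative assumptions.

```python
def affected_entry_points(traces, anomaly):
    """Return entry-point entities of traces affected by a localized anomaly."""
    entry_points = set()
    for trace in traces:
        touched = any(
            span["entity"] == anomaly["entity"]
            and anomaly["start"] <= span["time"] <= anomaly["end"]
            for span in trace["spans"]
        )
        if touched:
            entry_points.add(trace["spans"][0]["entity"])   # first span = entry point
    return entry_points


anomaly = {"entity": "database1", "start": 100, "end": 200}
traces = [
    {"spans": [{"entity": "web-gateway", "time": 110}, {"entity": "database1", "time": 120}]},
    {"spans": [{"entity": "batch-job", "time": 300}, {"entity": "database1", "time": 310}]},
]
print(affected_entry_points(traces, anomaly))   # {'web-gateway'}
```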
Abstract: A system and method is disclosed for the distributed analysis of high-frequency transaction trace data to constantly categorize incoming transaction data, identify relevant transaction categories, create per-category statistical reference and current data, and perform statistical tests to identify transaction categories showing overall statistically relevant performance anomalies. The relevant transaction category detection considers both the relative transaction frequency of categories compared to the overall transaction frequency and the temporal stability of a transaction category over an observation duration. The statistical data generated for the anomaly tests contains, in addition to data describing the overall performance of transactions of a category, data describing the transaction execution context, like the number of concurrently executed transactions or the transaction load during an observation period.
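A minimal sketch of relevant-category detection combining the two criteria named above: a category's share of the overall transaction volume and the fraction of observation intervals in which it appears must both exceed thresholds. The thresholds and the per-interval count format are illustrative assumptions.

```python
def relevant_categories(interval_counts, min_share=0.01, min_stability=0.8):
    """Select categories that are both frequent enough and temporally stable."""
    intervals = len(interval_counts)
    totals, presence, overall = {}, {}, 0
    for counts in interval_counts:                  # one dict of category -> count per interval
        overall += sum(counts.values())
        for category, count in counts.items():
            totals[category] = totals.get(category, 0) + count
            presence[category] = presence.get(category, 0) + 1
    return {
        category
        for category in totals
        if totals[category] / overall >= min_share
        and presence[category] / intervals >= min_stability
    }


intervals = [
    {"GET /checkout": 500, "GET /rare": 2},
    {"GET /checkout": 450},
    {"GET /checkout": 520, "GET /rare": 1},
]
print(relevant_categories(intervals))   # {'GET /checkout'}
```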
Abstract: A system and method is disclosed for the combined analysis of transaction execution monitoring data and a topology model created from infrastructure monitoring data of computing systems involved in the execution of the monitored transactions. Monitored communication activities of transactions are analyzed to identify intermediate processing nodes between the sender and receiver sides and to enrich transaction monitoring data with data describing those intermediate processing nodes. The topology model may also be improved by the combined analysis, as functionality and services provided by elements of the topology model may be derived from the involvement of those elements in the execution of monitored transactions. The result of the combined analysis is used by an automated anomaly detection and causality estimation system. The combined analysis may also reveal entities of a monitored environment that are used by transaction executions but are not monitored.
Type:
Grant
Filed:
February 15, 2019
Date of Patent:
September 13, 2022
Assignee:
Dynatrace LLC
Inventors:
Herwig Moser, Michael Kopp, Ernst Ambichl
Abstract: A system and method for the analysis of log data is presented. The system uses SuperMinHash-based locality-sensitive hash signatures to describe the similarity between log lines. Signatures are created for incoming log lines and stored in signature indexes. Later similarity queries use those indexes to improve query performance. The SuperMinHash algorithm uses a two-staged approach to determine signature values; one stage uses a first random number to calculate the index of the signature value that is to be updated. The two-staged approach improves the accuracy of the produced similarity estimation data for small signatures. The two-staged approach may further be used to produce random numbers that are related, e.g. each created random number may be larger than its predecessors. This relation is used to optimize the algorithm by detecting when further created random numbers can have no influence on the created signature and terminating early.
Abstract: A system and method is proposed for estimating the contribution of components of a distributed computing environment to the generation of economically relevant values, like revenue numbers. Agents are deployed to the computing environment that trace executed transactions and that monitor components used to execute those transactions. The transaction trace data also contains data about the origin/user of transactions, which may be used to group transactions corresponding to particular interactions of individual users with the monitored application into visit data. Data describing economically relevant activities of transactions, like the purchase of goods, is also observed by agents and reported in trace data. Functional dependencies described in transaction trace data and resource-related dependencies derived from component monitoring data are used to identify functionality and components that contributed to the generation of business value.
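A minimal sketch of the attribution step: business value observed in a transaction (e.g. a purchase amount) is credited to the components that its trace shows as contributing to the execution. Splitting the value evenly across components, and the visit/transaction record shapes, are illustrative assumptions; other weighting schemes are possible.

```python
def attribute_business_value(visits):
    """Distribute per-transaction business value over contributing components."""
    contribution = {}
    for visit in visits:
        for transaction in visit["transactions"]:
            value = transaction.get("business_value", 0.0)
            components = transaction["components"]
            if not value or not components:
                continue
            share = value / len(components)        # even split (illustrative assumption)
            for component in components:
                contribution[component] = contribution.get(component, 0.0) + share
    return contribution


visits = [
    {"user": "u1", "transactions": [
        {"components": ["frontend", "checkout-service", "database"], "business_value": 120.0},
        {"components": ["frontend", "search-service"]},
    ]},
]
print(attribute_business_value(visits))   # 40.0 credited to each contributing component
```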
Abstract: A system and method is disclosed for the distributed analysis of high-frequency transaction trace data to constantly categorize incoming transaction data, identify relevant transaction categories, create per-category statistical reference and current data, and perform statistical tests to identify transaction categories showing overall statistically relevant performance anomalies. The relevant transaction category detection considers both the relative transaction frequency of categories compared to the overall transaction frequency and the temporal stability of a transaction category over an observation duration. The statistical data generated for the anomaly tests contains, in addition to data describing the overall performance of transactions of a category, data describing the transaction execution context, like the number of concurrently executed transactions or the transaction load during an observation period.
Abstract: A system and method for the aggregation and grouping of previously identified, causally related abnormal operating conditions that are observed in a monitored environment is disclosed. Agents are deployed to the monitored environment which capture data describing structural aspects of the monitored environment, as well as data describing activities performed on it, like the execution of distributed transactions. The data describing structural aspects is aggregated into a topology model which describes individual components of the monitored environment, their communication activities and resource dependencies, and which also identifies and groups components that serve the same purpose, e.g. processes executing the same code. Activity-related monitoring data is continuously analyzed to identify abnormal operating conditions. Data describing abnormal operating conditions is analyzed in combination with topology data to identify networks of causally related abnormal operating conditions.
Abstract: A system is provided for tracing end-to-end transactions. The system uses bytecode instrumentation and a dynamically injected agent to gather web-server-side tracing data, and a browser agent which is injected into browser content to instrument that content and to capture tracing data about browser-side activities. Requests sent during monitored browser activities are tagged with correlation data. On the web server side, this correlation information is transferred to tracing data that describes the handling of the request. This tracing data is sent to an analysis server, which creates tracing information that describes the server-side execution of the transaction and is tagged with the correlation data, allowing the identification of the causing browser-side activity.
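A minimal sketch of the correlation-tagging idea: the browser agent adds correlation data to outgoing requests, and the server-side tracing transfers it into the trace describing the request handling. The header name and record shapes are illustrative assumptions, not the product's wire format.

```python
import uuid


def tag_request(headers, browser_action_id):
    """Browser-agent side: tag a request issued by a monitored browser activity."""
    tagged = dict(headers)
    tagged["x-correlation-id"] = f"{browser_action_id}/{uuid.uuid4().hex[:8]}"
    return tagged


def handle_request(headers, server_trace):
    """Server side: transfer correlation data into the trace describing the request."""
    server_trace["correlation_id"] = headers.get("x-correlation-id")
    return server_trace


request_headers = tag_request({"accept": "application/json"}, browser_action_id="click-42")
trace = handle_request(request_headers, {"service": "orders", "spans": []})
print(trace["correlation_id"])   # e.g. click-42/1f3a9c2d, linking server trace to browser action
```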
Type:
Application
Filed:
January 18, 2022
Publication date:
June 23, 2022
Applicant:
Dynatrace LLC
Inventors:
Bernd GREIFENEDER, Helmut SPIEGL, Markus GAISBAUER, Clemens FUCHS
Abstract: A technology is disclosed to perform real-time and online identification and prioritization of vulnerabilities of components of software applications. Agents are deployed to components of monitored applications that monitor and report application topology, communication, code execution and code loading activity. Reported code loading and execution activity data is used to detect the loading and execution of vulnerable code, while topology and communication data is used to create a topology model of the application containing communication paths, trust boundaries and the location of sensitive data. The analysis of code loading and execution data reveals the extent to which vulnerable code is used by monitored application components. The topology data combined with code execution data reveals the extent to which components executing vulnerable code are exposed to untrusted entities and/or access sensitive data.
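A minimal sketch of how such a prioritization might combine the signals named above: a base severity weighted by whether the vulnerable code is actually executed, whether the component is exposed to untrusted entities, and whether it accesses sensitive data. The weighting formula and field names are illustrative assumptions.

```python
def prioritize(vulnerabilities):
    """Rank vulnerability findings by runtime usage, exposure, and data sensitivity."""
    scored = []
    for v in vulnerabilities:
        score = v["base_severity"]                               # e.g. CVSS-like base score
        score *= 1.0 if v["vulnerable_code_executed"] else 0.2    # code actually used?
        score *= 1.5 if v["public_internet_exposure"] else 1.0    # reachable by untrusted entities?
        score *= 1.5 if v["accesses_sensitive_data"] else 1.0     # touches sensitive data?
        scored.append((round(score, 2), v["component"], v["cve"]))
    return sorted(scored, reverse=True)


findings = [
    {"cve": "CVE-A", "component": "payment-service", "base_severity": 7.5,
     "vulnerable_code_executed": True, "public_internet_exposure": True,
     "accesses_sensitive_data": True},
    {"cve": "CVE-B", "component": "batch-worker", "base_severity": 9.8,
     "vulnerable_code_executed": False, "public_internet_exposure": False,
     "accesses_sensitive_data": False},
]
print(prioritize(findings))   # CVE-A on payment-service ranks above CVE-B
```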
Abstract: A system and a method for grouping log lines in application log files is presented. The system uses logging framework code instrumentation in order to obtain a relation between the actual log line and the place in the source code from which the method responsible for writing the line has been called. As all information on the relation is stored in external metadata files, the structure of the log files remains unchanged. Using the above-mentioned metadata and a raw log file, the system can assign each log line in the file to a group related to the place in the source code from which the line was logged. Such a grouped log file can then be displayed to the user in order to simplify the analysis of the application behavior.
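A minimal sketch of the grouping step: the raw log file stays unchanged, and an external metadata structure recorded by the instrumented logging framework (keyed here by line number, an illustrative assumption) supplies the call site by which lines are grouped.

```python
from collections import defaultdict


def group_log_lines(raw_log_lines, call_site_metadata):
    """Group raw log lines by the source-code call site recorded in external metadata."""
    groups = defaultdict(list)
    for line_number, line in enumerate(raw_log_lines):
        call_site = call_site_metadata.get(line_number, "unknown call site")
        groups[call_site].append(line)
    return dict(groups)


raw_log = [
    "2023-01-12 10:00:01 INFO order 4711 accepted",
    "2023-01-12 10:00:02 WARN retrying payment for order 4711",
    "2023-01-12 10:00:05 INFO order 4712 accepted",
]
metadata = {0: "OrderService.java:87", 1: "PaymentClient.java:142", 2: "OrderService.java:87"}
for call_site, lines in group_log_lines(raw_log, metadata).items():
    print(call_site, len(lines))
```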
Abstract: A technology is disclosed for estimating the impact that heap memory allocations have on the behavior of garbage collection activities. A sampling mechanism randomly and without bias selects a subset of allocations for detailed analysis. A detailed analysis is performed for the selected allocation activities. Allocation monitoring data, including the type and size of the allocated object and data describing the code location at which the allocation was performed, is gathered. Further, the point in time when the allocated object is later reclaimed by garbage collection is recorded. Gathered object allocation and reclaim data are used to estimate, for individual allocation sites or types of allocated objects, the number of bytes that are allocated and the number of bytes that survive a garbage collection run. Allocation activity causing frequent garbage collection runs is identified using allocation size data, and the survived-byte counts are used to identify allocation activity causing long garbage collection runs.
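A minimal sketch of the estimation: a random subset of allocations is selected, and per allocation site the sampled sizes are scaled by the inverse sampling rate to estimate total allocated bytes and the bytes surviving a garbage collection run. The record shapes, sampling rate, and fixed seed are illustrative assumptions.

```python
import random
from collections import defaultdict


def estimate_allocation_impact(allocations, gc_time, sampling_rate=0.01, seed=7):
    """Estimate allocated and survived bytes per allocation site from a random sample."""
    rng = random.Random(seed)
    allocated, survived = defaultdict(float), defaultdict(float)
    scale = 1.0 / sampling_rate                    # inverse sampling rate
    for alloc in allocations:
        if rng.random() >= sampling_rate:          # unbiased random selection
            continue
        allocated[alloc["site"]] += alloc["size"] * scale
        if alloc["reclaimed_at"] is None or alloc["reclaimed_at"] > gc_time:
            survived[alloc["site"]] += alloc["size"] * scale
    return dict(allocated), dict(survived)


allocations = [
    {"site": "OrderParser.parse", "size": 512, "reclaimed_at": 90},
    {"site": "SessionCache.put", "size": 4096, "reclaimed_at": None},
] * 500
allocated, survived = estimate_allocation_impact(allocations, gc_time=100)
print(allocated)   # high allocated-byte estimates hint at frequent GC runs
print(survived)    # high survived-byte estimates hint at long GC runs
```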
Abstract: A technology to identify processing paths of untrusted input data received by applications that are vulnerable to attacks and to further detect and prevent actual attacks that try to exploit those vulnerabilities is disclosed. Application code is augmented at run-time with sensor code which detects the entry of input data into the application and further traces the propagation, manipulation and sanitization of this input data until its usage in a data sink. The data-flow traces generated this way reveal data-flow paths that lack the sanitization measures required to neutralize potentially harmful input data. Such data-flow paths are reported as vulnerabilities. Further, input data that reaches data-sink interfaces is scanned by data-sink sensors to identify harmful input data. On identification of harmful input data, an attack is reported, and countermeasures are applied to prevent the identified attack.
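A minimal sketch of the data-flow tracing idea: untrusted input is wrapped so that propagation and sanitization can be recorded until the value reaches a data sink, where unsanitized paths are reported. The wrapper approach is an illustrative assumption; the disclosure describes run-time augmentation of application code with sensor code instead.

```python
class TaintedValue:
    """Untrusted input with a recorded data-flow path and sanitization flag."""

    def __init__(self, text, sanitized=False, path=None):
        self.text = text
        self.sanitized = sanitized
        self.path = path or ["input"]

    def concat(self, other):
        # Propagation step: taint (and missing sanitization) carries over.
        return TaintedValue(self.text + other, self.sanitized, self.path + ["concat"])

    def sanitize(self):
        # Sanitization step: neutralize quotes and mark the value as sanitized.
        escaped = self.text.replace("'", "''")
        return TaintedValue(escaped, True, self.path + ["sanitize"])


def sql_sink(value):
    """Data-sink sensor: report unsanitized tainted input reaching the sink."""
    if isinstance(value, TaintedValue) and not value.sanitized:
        print("vulnerable data-flow path:", " -> ".join(value.path + ["sql_sink"]))
    text = value.text if isinstance(value, TaintedValue) else value
    return "SELECT * FROM users WHERE name = '" + text + "'"


user_input = TaintedValue("bob' OR '1'='1")
sql_sink(user_input.concat(" (unsafe)"))      # reported: sanitization missing on this path
sql_sink(user_input.sanitize())               # sanitized, not reported
```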
Abstract: A system and method for the aggregation and grouping of previously identified, causally related abnormal operating conditions that are observed in a monitored environment is disclosed. Agents are deployed to the monitored environment which capture data describing structural aspects of the monitored environment, as well as data describing activities performed on it, like the execution of distributed transactions. The data describing structural aspects is aggregated into a topology model which describes individual components of the monitored environment, their communication activities and resource dependencies, and which also identifies and groups components that serve the same purpose, e.g. processes executing the same code. Activity-related monitoring data is continuously analyzed to identify abnormal operating conditions. Data describing abnormal operating conditions is analyzed in combination with topology data to identify networks of causally related abnormal operating conditions.
Abstract: A system and method for real-time discovery and monitoring of multidimensional topology models describing structural aspects of applications and of the computing infrastructure used to execute those applications is disclosed. Different types of agents, each dedicated to capturing specific topological aspects of the monitored system, are deployed to the monitored application execution infrastructure. Virtualization agents detect and monitor the virtualization structure of virtualized hardware used in the execution infrastructure; operating system agents deployed to individual operating systems monitor resource utilization, performance and communication of processes executed by the operating system; and transaction agents deployed to processes participating in the execution of transactions provide end-to-end transaction trace and monitoring data describing individual transaction executions.
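A minimal sketch of how reports from the three agent types could be merged into one multidimensional topology model; the vertical stacking (hypervisor, host, process, service call) and the report shapes are illustrative assumptions.

```python
def build_topology(virtualization_reports, os_reports, transaction_reports):
    """Merge virtualization, operating-system, and transaction agent reports into one model."""
    model = {"hypervisors": {}, "hosts": {}, "processes": {}, "service_calls": []}
    for report in virtualization_reports:        # virtualization agents: hypervisor -> VM hosts
        model["hypervisors"].setdefault(report["hypervisor"], set()).add(report["vm_host"])
    for report in os_reports:                    # operating system agents: host -> processes
        host = model["hosts"].setdefault(report["host"], {"processes": set()})
        host["processes"].add(report["process"])
        model["processes"][report["process"]] = {"host": report["host"], "cpu": report["cpu"]}
    for report in transaction_reports:           # transaction agents: process-to-process calls
        model["service_calls"].append((report["caller_process"], report["callee_process"]))
    return model


model = build_topology(
    [{"hypervisor": "esx-1", "vm_host": "host-a"}],
    [{"host": "host-a", "process": "tomcat-1", "cpu": 0.4}],
    [{"caller_process": "nginx-0", "callee_process": "tomcat-1"}],
)
print(model["hosts"]["host-a"]["processes"])   # {'tomcat-1'}
```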
Type:
Application
Filed:
October 22, 2021
Publication date:
February 10, 2022
Applicant:
Dynatrace LLC
Inventors:
Bernd GREIFENEDER, Ernst AMBICHL, Andreas LEHOFER, Gunther SCHWARZBAUER, Helmut SPIEGL, Rafal MLOTOWSKI