Patents by Inventor Andrey Karpovsky

Andrey Karpovsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240152798
    Abstract: Some embodiments select a machine learning model training duration based at least in part on a fractal dimension calculated for a training dataset. Model training durations are based on one or more characteristics of the data, such as a fractal dimension, a data distribution, or a spike count. Default long training durations are sometimes replaced by shorter durations without any loss of model accuracy. For instance, the time-to-detect for a model-based intrusion detection system is shortened by days in some circumstances. Model training is performed per a profile which specifies particular resources or particular entities, or both. Realistic test data is generated on demand. Test data generation allows the trained model to be exercised for demonstrations, or for scheduled confirmations of effective monitoring by a model-based security tool, without thereby altering the model's training.
    Type: Application
    Filed: November 9, 2022
    Publication date: May 9, 2024
    Inventors: Andrey KARPOVSKY, Eitan SHTEINBERG, Tamer SALMAN
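The fractal-dimension idea in this abstract can be sketched with a simple box-counting estimate; the mapping from dimension to training duration below is purely illustrative (the function names, scales, and hour values are assumptions, not taken from the patent):

```python
import math

def box_counting_dimension(series, scales=(1, 2, 4, 8, 16)):
    """Estimate a box-counting fractal dimension for a 1-D data series.

    Counts how many grid boxes cover the normalized curve at several
    scales, then fits log(count) against log(1/scale).
    """
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    norm = [(v - lo) / span for v in series]
    xs, ys = [], []
    for s in scales:
        boxes = set()
        for i, v in enumerate(norm):
            boxes.add((i // s, int(v * len(norm)) // s))
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # least-squares slope of log(count) vs log(1/scale)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def training_duration_hours(series, default_hours=72, floor_hours=6):
    """Hypothetical mapping: smoother data (lower dimension) -> shorter training."""
    d = box_counting_dimension(series)
    # clamp the dimension into [1, 2] and interpolate between floor and default
    frac = min(max(d - 1.0, 0.0), 1.0)
    return floor_hours + frac * (default_hours - floor_hours)
```

A nearly linear series yields a dimension close to 1, which this sketch maps to the shortest duration; a rougher series keeps more of the default duration.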
  • Publication number: 20240137376
    Abstract: The techniques disclosed herein prevent a rogue resource from being created within a cloud computing environment. For example, a rogue serverless function may be prevented from integrating with a cloud-based database, thereby preventing the serverless function from performing malicious operations such as low-rate data exfiltration. The rogue serverless function is detected before it is installed, heading off the attack completely. In some configurations, a key retrieval request is received. Parameters of the key retrieval request are analyzed for anomalies, and anomalous key retrieval requests are stored in a pool. Then, when a request to create a resource is received, the pool of anomalous key retrieval requests is searched for a match. When a match is found, the resource creation request may be suspended pending a further security review.
    Type: Application
    Filed: December 27, 2022
    Publication date: April 25, 2024
    Inventors: Evgeny BOGOKOVSKY, Ram Haim PLISKIN, Andrey KARPOVSKY
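A minimal sketch of the two-phase flow this abstract describes: pool anomalous key retrieval requests, then match later resource-creation requests against the pool. All class and field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class KeyRetrievalRequest:
    principal: str        # identity that requested the key
    source_ip: str
    target_store: str     # e.g. the database whose key was requested

class RogueResourceGuard:
    """Pool anomalous key retrievals; suspend matching creation requests."""

    def __init__(self, is_anomalous):
        self.is_anomalous = is_anomalous   # pluggable anomaly test
        self.pool = []

    def observe_key_retrieval(self, req: KeyRetrievalRequest):
        # Phase 1: keep only requests whose parameters look anomalous.
        if self.is_anomalous(req):
            self.pool.append(req)

    def allow_resource_creation(self, principal: str, target_store: str) -> bool:
        # Phase 2: suspend creation (False) if it matches a pooled anomaly.
        return not any(req.principal == principal and
                       req.target_store == target_store
                       for req in self.pool)
```

The anomaly test itself is left pluggable because the abstract only says request parameters are "analyzed for anomalies" without specifying how.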
  • Publication number: 20240095352
    Abstract: Files uploaded to a cloud storage medium are considered. The files may include a mixture of files known to be malicious and known to be benign. The files are clustered using similarity of file features, e.g., based on distance in a feature space. File clusters may then be used to determine a threat status of an unknown file (a file whose threat status is unknown initially). A feature of the unknown file in the feature space is determined, and a distance in the feature space between the file and a file cluster is calculated. The distance between the unknown file and the file cluster is used to determine whether or not to perform a deep scan on the unknown file. If such a need is identified, and the deep scan indicates the unknown file is malicious, a cybersecurity action is triggered.
    Type: Application
    Filed: December 15, 2022
    Publication date: March 21, 2024
    Inventors: Tamer SALMAN, Andrey KARPOVSKY
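The distance-based triage in this abstract can be illustrated as follows; the feature vectors, centroid rule, and radius are stand-ins, not the patent's actual features:

```python
import math

def euclidean(a, b):
    """Distance between two points in the feature space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(points):
    """Mean point of a file cluster."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def needs_deep_scan(unknown, malicious_clusters, radius=1.0):
    """Deep-scan the unknown file if its feature vector falls within
    `radius` of any known-malicious cluster centroid (illustrative rule)."""
    return any(euclidean(unknown, centroid(c)) <= radius
               for c in malicious_clusters)
```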
  • Patent number: 11936669
    Abstract: Unauthorized use of user credentials in a network is detected. Data indicative of text strings being used to access resources in the network is accessed. Regex models are determined for the text strings. Groupings of the regex models are determined based on an optimization of a cumulative weighted function. A regex model having a cumulative weighted function that exceeds a predetermined threshold is identified. An alert is generated when the cumulative weighted function for the identified regex model exceeds the predetermined threshold.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: March 19, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Andrey Karpovsky, Tomer Rotstein, Fady Nasereldeen, Naama Kraus, Roy Levin, Yotam Livny
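One way to picture the "regex models" in this abstract is to collapse each text string into a coarse structural pattern and accumulate weights per pattern; the generalization rules and threshold below are invented for illustration and are not the patented weighted function:

```python
import re
from collections import Counter

def regex_model(text: str) -> str:
    """Coarse structural model: collapse runs of letters, then digits."""
    model = re.sub(r"[A-Za-z]+", r"\\w+", text)
    model = re.sub(r"[0-9]+", r"\\d+", model)
    return model

def alert_models(strings, weights=None, threshold=3.0):
    """Flag regex models whose cumulative weight exceeds a threshold."""
    weights = weights or {}
    score = Counter()
    for s in strings:
        score[regex_model(s)] += weights.get(s, 1.0)
    return [m for m, w in score.items() if w > threshold]
```

Here a burst of similar-looking access strings (e.g. sequentially generated usernames) collapses into one model whose weight crosses the threshold.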
  • Publication number: 20240056486
    Abstract: Some embodiments automatically reduce or remove gaps between a data resource's actual policy and an optimal policy. Policy gaps may arise when a different kind of data is added to the resource after the policy was set, or when the original policy is deemed inadequate, for example. An embodiment obtains a characterization of the resource's data in terms of sensitivity, criticality, or category, captured in scores or labels. The embodiment locates the resource's current policy, and conforms the policy with best practices, by modifying or replacing the policy as indicated. Policy adjustments may implement recommendations that were generated by an artificial intelligence model. Policy adjustments may be periodic, ongoing, or driven by specified trigger events. Policy conformance of particular resource sets may be prioritized. Automated policy conformance improves security, operational consistency, and computational efficiency, and relieves personnel of tedious and error-prone tasks.
    Type: Application
    Filed: August 10, 2022
    Publication date: February 15, 2024
    Inventors: Sagi LOWENHARDT, Andrey KARPOVSKY
  • Publication number: 20240007490
    Abstract: According to examples, an apparatus may include a processor that may calculate a normalized threat intelligence score (TIS) for an autonomous system (AS) based on a sum of threat intelligence (TI) signals associated with Internet protocol (IP) addresses controlled by the AS and a count of the IP addresses controlled by the AS. The processor may also determine, based on the normalized TIS for the AS, a probability that activities associated with the IP addresses controlled by the AS are likely to be malicious. The processor may further output the determined probability that the activities associated with the IP addresses controlled by the AS are likely to be malicious.
    Type: Application
    Filed: June 29, 2022
    Publication date: January 4, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Shay Chriba SAKAZI, Andrey KARPOVSKY, Moshe ISRAEL
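The abstract states the normalized TIS is based on a sum of TI signals and a count of IP addresses controlled by the AS; a per-IP ratio with a logistic squash to a probability is one plausible reading (the midpoint and steepness constants are assumptions):

```python
import math

def normalized_tis(ti_signal_sum: float, ip_count: int) -> float:
    """TI signals per IP address controlled by the autonomous system."""
    if ip_count == 0:
        return 0.0
    return ti_signal_sum / ip_count

def malicious_probability(score: float, midpoint: float = 0.05,
                          steepness: float = 50.0) -> float:
    """Hypothetical logistic squash of the normalized score into [0, 1]."""
    return 1.0 / (1.0 + math.exp(-steepness * (score - midpoint)))
```

Normalizing by IP count keeps a large AS with a few bad addresses from outscoring a small AS that is mostly malicious.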
  • Publication number: 20230418948
    Abstract: A computing system and method for training one or more machine-learning models to perform anomaly detection. A training dataset is accessed. An overall sensitivity score is determined that indicates an amount of sensitive data in the training dataset. Machine-learning models are trained based on the training dataset and the overall sensitivity score. The machine-learning models use the overall sensitivity score to determine a threshold. The threshold is relatively low for datasets having a large amount of sensitive data and is relatively high for datasets having a small amount of sensitive data. When executed, the machine-learning models determine if a probability score of features extracted from a received dataset is above the determined threshold when a second overall sensitivity score of the received dataset is substantially similar to the overall sensitivity score. When the probability score is above the determined threshold, the machine-learning models cause an alert to be generated.
    Type: Application
    Filed: June 23, 2022
    Publication date: December 28, 2023
    Inventors: Andrey KARPOVSKY, Sagi LOWENHARDT, Shimon EZRA
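The inverse relationship the abstract describes, more sensitive data yielding a lower alert threshold, can be sketched linearly (the bounds 0.5 and 0.95 are arbitrary illustration values):

```python
def anomaly_threshold(sensitivity: float, lo: float = 0.5,
                      hi: float = 0.95) -> float:
    """Map an overall sensitivity score in [0, 1] to an alert threshold:
    more sensitive data -> lower threshold -> more alerts (linear sketch)."""
    s = min(max(sensitivity, 0.0), 1.0)
    return hi - s * (hi - lo)

def should_alert(probability_score: float, sensitivity: float) -> bool:
    """Alert when the model's probability score clears the threshold."""
    return probability_score > anomaly_threshold(sensitivity)
```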
  • Patent number: 11856015
    Abstract: An anomalous action security assessor is disclosed. An anomaly is received from a set of anomalies. A series of linked queries associated with the anomaly is presented to the user. The series of linked queries includes a base query and a subquery. The base query tests an attribute of the anomaly and resolves to one of a plurality of outcomes. The subquery is associated with one of those outcomes. The series of linked queries finally resolves to one of two results: tag the anomaly or dismiss the anomaly. A security alert is issued if the series finally resolves to tagging the anomaly.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: December 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Roy Levin, Andrey Karpovsky
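The linked base query/subquery structure can be modeled as a small decision tree that finally resolves to tag or dismiss; the questions below are hypothetical examples, not the patent's queries:

```python
# Each query tests an attribute of the anomaly; an outcome either links to
# a subquery or resolves directly to "tag" / "dismiss".
TRIAGE_TREE = {
    "question": "known_source_ip",
    "outcomes": {
        True: "dismiss",
        False: {
            "question": "touched_sensitive_table",
            "outcomes": {True: "tag", False: "dismiss"},
        },
    },
}

def resolve(anomaly: dict, node=TRIAGE_TREE) -> str:
    """Walk the linked queries until they finally resolve."""
    outcome = node["outcomes"][anomaly[node["question"]]]
    if isinstance(outcome, dict):          # a linked subquery
        return resolve(anomaly, outcome)
    return outcome                         # "tag" or "dismiss"

def maybe_alert(anomaly: dict) -> bool:
    """Issue a security alert only when the anomaly is tagged."""
    return resolve(anomaly) == "tag"
```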
  • Publication number: 20230403289
    Abstract: A computing system generates from received user input an initial profile. The initial profile specifies expected behavioral patterns of datasets that are to be received by the computing system. The computing system extracts from received datasets features that are indicative of behavioral patterns of the received datasets. The computing system provides the initial profile to first machine-learning models. The first machine-learning models have been trained using a subset of the received datasets. The first machine-learning models use the initial profile to determine if the behavioral patterns of the received datasets are anomalous. The computing system includes second machine-learning models that have been trained using a subset of the received datasets. The second machine-learning models train a second profile based on the extracted features to specify behavioral patterns of the received datasets that are learned by the second machine-learning model.
    Type: Application
    Filed: June 14, 2022
    Publication date: December 14, 2023
    Inventors: Andrey Karpovsky, Idan Hen
  • Publication number: 20230343072
    Abstract: The disclosed technology is generally directed to data classification. In one example of the technology, training data and a ground truth that indicates sensitive data within the training data is received. Based at least on the training data, natural language processing is used to learn features. The features include a naming feature that is associated with names of data resources in the training data. Based at least on the training data and the ground truth, using supervised learning, a model that is a heuristic model and/or a machine learning model is created. Input data information that is associated with input data is received. The model is used to determine a data resource sensitivity estimator (DRSE) value for each portion of the input data. The determination is based on the combination of features for the input data. Potentially sensitive data within the input data is flagged based on the DRSE values.
    Type: Application
    Filed: April 25, 2022
    Publication date: October 26, 2023
    Inventors: David TRIGANO, Andrey KARPOVSKY, Basel SHAHEEN
  • Publication number: 20230300156
    Abstract: Multi-variate anomaly detection is performed against multiple scopes of the requested resource. Even if one of the variate patterns is sensitive to the physical location of the requestor and/or the resource, not all of the variates of the access pattern will be. Furthermore, even if one scope of the resource is sensitive to the physical location of the resource, not all scopes will be. Thus, using multiple variates of the access pattern and multiple scopes of anomaly detection allows better anomaly estimates to be made, even when the source of the access request and/or the location of the resource is virtualized.
    Type: Application
    Filed: January 31, 2022
    Publication date: September 21, 2023
    Inventors: Andrey KARPOVSKY, Idan Yehoshua HEN
  • Publication number: 20230275913
    Abstract: Techniques are described herein that are capable of using graph enrichment to detect a potentially malicious access attempt. A graph that includes nodes and configuration-based links is generated. The nodes represent respective resources. Behavior-based links are added to the graph based at least in part on traffic logs associated with at least a subset of the resources. An attempt to create a new behavior-based link is identified. A probability of the new behavior-based link being created in the graph is determined. The probability is based at least in part on the configuration-based links and the behavior-based links. The new behavior-based link is identified as a potentially malicious link based at least in part on the probability being less than or equal to a threshold probability. A security action is performed based at least in part on the new behavior-based link being identified as a potentially malicious link.
    Type: Application
    Filed: February 25, 2022
    Publication date: August 31, 2023
    Inventors: Shay Chriba SAKAZI, Andrey KARPOVSKY, Amit Magen MEDINA, Tamer SALMAN
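The abstract leaves the probability model unspecified; a common-neighbor overlap score over the existing configuration-based and behavior-based links is one simple stand-in for judging how expected a new link is (the adjacency-list encoding and threshold are assumptions):

```python
def link_probability(graph: dict, src: str, dst: str) -> float:
    """Proxy for how expected a new src->dst link is, from existing links:
    Jaccard overlap of the two nodes' current neighbor sets."""
    n_src = set(graph.get(src, ()))
    n_dst = set(graph.get(dst, ()))
    union = n_src | n_dst
    if not union:
        return 0.0
    return len(n_src & n_dst) / len(union)

def is_potentially_malicious(graph: dict, src: str, dst: str,
                             threshold: float = 0.1) -> bool:
    """Flag the attempted link when its probability is at or below threshold."""
    return link_probability(graph, src, dst) <= threshold
```

Two resources that already talk to the same neighbors score high (expected), while a link between unrelated parts of the graph scores near zero and triggers a security action.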
  • Publication number: 20230267198
    Abstract: Methods, systems, apparatuses, and computer-readable storage mediums described herein are configured to detect anomalous behavior with respect to control plane operations (e.g., resource management operations, resource configuration operations, resource access enablement operations, etc.). For example, a log that specifies an access enablement operation performed with respect to an entity is received. An anomaly score is generated indicating a probability whether the access enablement operation is indicative of anomalous behavior via an anomaly prediction model. A determination is made as to whether anomalous behavior has occurred with respect to the entity based at least on the anomaly score. Based on a determination that the anomalous behavior has occurred, a mitigation action may be performed that mitigates the anomalous behavior.
    Type: Application
    Filed: February 24, 2022
    Publication date: August 24, 2023
    Inventors: Andrey KARPOVSKY, Ram Haim PLISKIN, Evgeny BOGOKOVSKY
  • Publication number: 20230267199
    Abstract: Methods, systems, apparatuses, and computer-readable storage mediums are described for adapting a spike detection algorithm. A first detection algorithm that monitors a first set of events in a computing environment is executed. A set of constraint metrics in the computing environment are monitored. Based on the monitored set of constraint metrics, a second detection algorithm is generated. The second detection algorithm is an adapted version of the first detection algorithm and is configured to monitor a second set of events in the computing environment. The second detection algorithm is executed, and a remediation action is performed in response to an abnormal event detected in the computing environment by the second detection algorithm.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 24, 2023
    Inventors: Omer SAVION, Andrey KARPOVSKY, Fady NASER EL DEEN
  • Publication number: 20230269262
    Abstract: Methods, systems, apparatuses, and computer-readable storage mediums described herein are configured to detect mass control plane operations, which may be indicative of anomalous (or malicious) behavior. For example, one or more logs that specify a plurality of access enablement operations performed with respect to an entity is received. The log(s) are analyzed to identify a number of access enablement operations that occurred in a particular time period. A determination is made as to whether the identified number of access enablement operations meets a threshold condition (e.g., to determine whether an unusually high number of such operations occurred in a given time period). Based on the threshold condition being met, a determination is made that anomalous behavior has occurred with respect to the entity. Responsive to determining that the potentially anomalous behavior has occurred, a mitigation action may be performed that mitigates the behavior.
    Type: Application
    Filed: February 24, 2022
    Publication date: August 24, 2023
    Inventors: Andrey KARPOVSKY, Ram Haim PLISKIN, Evgeny BOGOKOVSKY
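The windowed threshold check this abstract describes can be sketched directly (the window length and threshold values are placeholders):

```python
from collections import deque

class MassOperationDetector:
    """Sliding-window count of access enablement operations per entity;
    flags anomalous behavior when the count meets a threshold."""

    def __init__(self, window_seconds: float = 300, threshold: int = 10):
        self.window = window_seconds
        self.threshold = threshold
        self.events = {}          # entity -> deque of timestamps

    def record(self, entity: str, ts: float) -> bool:
        """Record one operation; return True when mitigation should fire."""
        q = self.events.setdefault(entity, deque())
        q.append(ts)
        # drop operations that fell out of the time window
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```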
  • Publication number: 20230231859
    Abstract: According to examples, an apparatus may include a processor and a memory on which is stored machine-readable instructions that when executed by the processor, may cause the processor to determine baseline behaviors from collected data. The processor may also detect that an anomalous event has occurred and may determine at least one feature of the anomalous event that caused the event to be determined to be anomalous. The processor may further identify, from the determined baseline behaviors, a set of baseline behaviors corresponding to the determined at least one feature. The processor may still further generate a message to include an indication that the anomalous event has been detected and the identified set of baseline behaviors and may output the generated message.
    Type: Application
    Filed: January 18, 2022
    Publication date: July 20, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Idan Yehoshua HEN, Andrey KARPOVSKY
  • Publication number: 20230224323
    Abstract: Techniques are described herein that are capable of detecting malicious obfuscation in a SQL statement based at least in part on an effect and/or processed version of the SQL statement. In a first example, a raw version of a SQL statement is compared to a processed version of the SQL statement. A determination is made that a command in the processed version is not included in the raw version. The raw version is detected to be malicious based at least in part on the determination. In a second example, a SQL statement is bound to an event that results from execution of the SQL statement. Textual content of the SQL statement and an effect of the event are compared. The SQL statement is detected to be malicious based at least in part on the effect of the event not being indicated by the textual content.
    Type: Application
    Filed: January 10, 2022
    Publication date: July 13, 2023
    Inventors: Michael MAKHLEVICH, Andrey KARPOVSKY, Fady NASER EL DEEN
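The first example in this abstract, a command present in the processed statement but absent from the raw text, can be sketched with a crude keyword tokenizer (the keyword list and tokenization are simplifications of whatever the patented comparison actually does):

```python
def commands(sql: str) -> set:
    """Crude tokenizer: upper-cased SQL command keywords in the statement."""
    KEYWORDS = {"SELECT", "INSERT", "UPDATE", "DELETE", "DROP", "EXEC", "UNION"}
    tokens = {t.strip("();,").upper() for t in sql.split()}
    return tokens & KEYWORDS

def looks_obfuscated(raw_sql: str, processed_sql: str) -> bool:
    """Malicious-obfuscation signal: a command appears after processing
    that the raw text never showed."""
    return bool(commands(processed_sql) - commands(raw_sql))
```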
  • Publication number: 20230205882
    Abstract: Detection of, and alerting on, malicious queries directed towards a data store is described. The detection is done using syntax metrics of the query. This can be done without evaluating (or at least without retaining) the unmasked query. To detect a potentially malicious query, syntax metric(s) of that query are accessed. The syntax metric(s) are then fed into a model that is configured to predict the maliciousness of the query based on the one or more syntax metrics. The output of the model then represents a prediction of the maliciousness of the query. Based on that output, a computing entity associated with the data store is alerted.
    Type: Application
    Filed: December 29, 2021
    Publication date: June 29, 2023
    Inventors: Andrey KARPOVSKY, Michael MAKHLEVICH, Tomer ROTSTEIN
  • Publication number: 20230199003
    Abstract: The embodiments described herein are directed to generating labels for alerts and utilizing such labels to train a machine learning algorithm for generating more accurate alerts. For instance, alerts may be generated based on log data generated from an application. After an alert is issued, activity of a user in relation to the alert is tracked. The tracked activity is utilized to generate a metric for the alert indicating a level of interaction between the user and the alert. Based on the metric, the log data on which the alert is based is labeled as being indicative of one of suspicious activity or benign activity. During a training process, the labeled log data is provided to a supervised machine learning algorithm that learns what constitutes suspicious activity or benign activity. The algorithm generates a model, which is configured to receive newly-generated log data and issue security alerts based thereon.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 22, 2023
    Inventors: Andrey KARPOVSKY, Roy LEVIN, Tamer SALMAN
  • Publication number: 20230185899
    Abstract: An incoming query targeted to a data store is processed in a way that enables early detection of code injections. Initial code injections, even if unsuccessful, can be used to adjust subsequent injections to more successfully perform harmful actions against the data store. Accordingly, early detection can be used to block attackers from experimenting against the data store. The early detection is accomplished by detecting when all or a portion of the query is structured in accordance with a query language but does not follow the syntax recognized by the underlying data store. This is a good indication that the issuer of the query is performing a blind code injection, not knowing the type of the underlying data store.
    Type: Application
    Filed: December 15, 2021
    Publication date: June 15, 2023
    Inventors: Haim Saadia BENDANAN, Andrey KARPOVSKY, Inbal ARGOV
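The detection idea above, a statement shaped like a query language that the underlying store's syntax nevertheless rejects, can be sketched as follows, with `store_accepts` standing in for the real store's grammar check (the regex and the example store are hypothetical):

```python
import re

# Does the text look like a generic SQL-style query at all?
GENERIC_QUERY = re.compile(r"^\s*(SELECT|INSERT|UPDATE|DELETE|DROP|UNION)\b",
                           re.IGNORECASE)

def early_injection_signal(query: str, store_accepts) -> bool:
    """Flag a probable blind code injection: the text is shaped like a
    query language, yet the underlying store rejects its syntax."""
    looks_like_query_language = bool(GENERIC_QUERY.match(query))
    return looks_like_query_language and not store_accepts(query)
```

For a document store whose own grammar looks nothing like SQL, a SQL-shaped probe is a strong early signal that an attacker is guessing at the store type.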