Patents by Inventor Tomer Kushnir

Tomer Kushnir has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250077953
    Abstract: Methods and systems for managing evolving artificial intelligence (AI) models are disclosed. The evolving AI models may be used to generate inferences that may be provided to downstream consumers during a provisioning process. The downstream consumers may rely on the accuracy and consistency of the inferences provided during the provisioning process to provide desired computer-implemented services. The AI models may be updated (e.g., with new training data) automatically and/or frequently over time in order to increase the accuracy of inferences provided by the AI model. However, inferences provided by a newly updated instance of an AI model may be inconsistent with inferences provided by prior instances of the AI model (e.g., due to AI model poisoning). Therefore, to increase the likelihood of providing accurate and consistent (e.g., unpoisoned) inferences to the downstream consumers, an appropriate instance of the AI model may be identified for use in the provisioning process.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
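The instance-selection idea in the abstract above can be sketched as follows. This is a hypothetical illustration, not the patented method: `select_instance`, the tolerance value, and the consecutive-consistency check are all assumptions made for the example.

```python
# Hypothetical sketch: pick the newest AI model instance whose outputs stay
# consistent with the previous instance on a validation set, falling back to
# an older instance when a large drift suggests possible poisoning.

def mean_abs_diff(a, b):
    """Average absolute difference between two inference vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_instance(instances, validation_inputs, tolerance=0.1):
    """instances: list of callables ordered oldest -> newest.
    Returns the newest instance consistent with its predecessor."""
    chosen = instances[0]
    for prev, cur in zip(instances, instances[1:]):
        prev_out = [prev(x) for x in validation_inputs]
        cur_out = [cur(x) for x in validation_inputs]
        if mean_abs_diff(prev_out, cur_out) <= tolerance:
            chosen = cur   # consistent update: accept the newer instance
        else:
            break          # inconsistent (possibly poisoned): stop here
    return chosen
```

In this sketch a large drift between consecutive instances halts the walk forward, so the newest instance whose inferences remain consistent with its predecessor is the one used for provisioning.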
  • Publication number: 20250080587
    Abstract: Methods and systems for managing an artificial intelligence (AI) model are disclosed. An AI model may be part of an evolving AI model pipeline, the processes of which may include obtaining training data from data sources used to update the AI model. An attacker may introduce poisoned training data via one or more of the data sources as a form of attack on the AI model. When the poisoned training data is identified, the poisoned training data may be compared to existing training data to determine the attacker's goal. Based on the attacker's goal, remedial actions may be performed that may update operation of the pipeline. The updated operation of the pipeline may reduce the computational expense for remediating the impact of the poisoned training data, and may reduce the likelihood of obtaining poisoned training data in the future.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
  • Publication number: 20250077954
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, an instance of an AI model may not be re-trained using training data determined to be potentially poisoned. By doing so, malicious attacks intending to influence the AI model using poisoned training data may be prevented. To do so, a first causal relationship present in historical training data may be compared to a second causal relationship present in a candidate training data set. The first causal relationship and the second causal relationship may be expected to be similar within a threshold. If a difference between the first causal relationship and the second causal relationship is not within the threshold, the candidate training data may be treated as including poisoned training data.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
  • Publication number: 20250077910
    Abstract: Methods and systems for managing impact of inferences provided to inference consumers on decisions made by the inference consumers are disclosed. Poisoned training data may be introduced and used to train an AI model, which may then poison the AI model and lead to poisoned inferences being provided to the inference consumers. Inference consumers may deploy hardware to customers based on the poisoned inferences. To determine whether to modify the deployed hardware, a performance cost associated with the deployed hardware may be obtained. The performance cost may indicate a deviation between operation of the deployed hardware and operation of hardware that may have been deployed if an unpoisoned inference was used. If the performance cost meets a performance cost threshold, at least one additional hardware component may be deployed to the customer.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
  • Publication number: 20250077650
    Abstract: Methods and systems for managing impact of inferences provided to inference consumers are disclosed. An artificial intelligence (AI) model may be poisoned by poisoned training data and may provide poisoned inferences to an inference consumer. To determine whether to remediate the poisoned inference, first use of the poisoned inference may be compared to second use of a second inference generated by a second AI model that is not believed to be poisoned. The first use and the second use may be the same type of use and a deviation between the first use and the second use may indicate an extent to which the poisoned inference impacted the inference consumer. A quantification of the deviation may be obtained and compared to a quantification threshold. If the quantification meets the quantification threshold, an action set may be performed to remediate the impact of the poisoned inference.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
  • Publication number: 20250077659
    Abstract: Methods and systems for managing inferences throughout a distributed environment are disclosed. Poisoned training data may be introduced and used to train an AI model, which may then poison the AI model and lead to poisoned inferences being provided to the inference consumers. Entities may submit challenges alleging that decisions made by the inference consumers are due to consumption of the poisoned inferences. To respond to the challenges, a replacement inference may be generated and consumed by a digital twin of the inference consumers. A quantification of deviation of operation between the inference consumers after consuming the poisoned inference and operation of the digital twin after consuming the replacement inference may be obtained and included in a response to the challenge. The response may also include an extent of agreement or disagreement with the allegation.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
  • Publication number: 20250077713
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, an instance of an AI model may not be re-trained using training data determined to be potentially poisoned. By doing so, malicious attacks intending to influence the AI model using poisoned training data may be prevented. To do so, a first level of strength of a first causal relationship present in historical training data may be compared to a second level of strength of a second causal relationship present in a candidate training data set. The first level of strength and the second level of strength may be expected to be similar within a threshold. If a difference between the first level of strength and the second level of strength is not within the threshold, the candidate training data may be treated as including poisoned training data.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
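The threshold comparison described in this abstract (and in the related application 20250077954 above) can be sketched with Pearson correlation standing in for the "level of strength" of a causal relationship. This is a hedged illustration: the patent does not specify correlation, and the function names and threshold value are assumptions for the example.

```python
# Hypothetical sketch: use Pearson correlation as a stand-in for the
# strength of a causal relationship between two variables in a data set,
# then flag a candidate set whose strength shifts too far from history.

import math

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def looks_poisoned(historical, candidate, threshold=0.2):
    """Each data set is a pair (xs, ys). The candidate set is treated as
    potentially poisoned when the relationship strength shifts too far."""
    hist_strength = correlation(*historical)
    cand_strength = correlation(*candidate)
    return abs(hist_strength - cand_strength) > threshold
```

A candidate set whose variable relationship roughly matches the historical strength passes; one whose relationship flips or collapses is held back from re-training.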
  • Publication number: 20250077955
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, an instance of an AI model may not be re-trained using training data determined to be too similar to previously used training data. By doing so, malicious attacks intending to shift the AI model in a particular direction using poisoned training data may be prevented. To do so, a clustering analysis may be performed using a candidate training data set and variable clustering criteria prior to performing re-training of an instance of an AI model using the candidate training data set. The analysis may result in a score. If the score exceeds a score threshold, the candidate training data set may be considered to contain poisoned training data. If the score does not exceed the score threshold, the candidate training data set may be accepted as usable to train an instance of the AI model.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
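A minimal sketch of the scoring idea in this abstract, assuming a simple nearest-point fraction as the clustering score; the actual clustering analysis and criteria are not specified in the abstract, so the function names and the radius and threshold values are hypothetical.

```python
# Hypothetical sketch: score how tightly candidate training points cluster
# around previously used points; a very high score suggests the candidate
# data is suspiciously similar to past data and may be poisoned.

def similarity_score(existing, candidate, radius=0.5):
    """Fraction of candidate points lying within `radius` of any existing
    point (1-D values for simplicity)."""
    near = sum(
        1 for c in candidate
        if any(abs(c - e) <= radius for e in existing)
    )
    return near / len(candidate)

def accept_candidate(existing, candidate, score_threshold=0.9):
    """Reject (treat as potentially poisoned) when nearly every candidate
    point duplicates existing data; accept otherwise."""
    return similarity_score(existing, candidate) <= score_threshold
```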
  • Publication number: 20250077657
    Abstract: Methods and systems for managing an artificial intelligence (AI) model are disclosed. An AI model may be part of an evolving AI model pipeline, the processes of which may include obtaining training data from data sources used to update the AI model. An attacker may introduce poisoned training data via one or more of the data sources as a form of attack on the AI model. When the poisoned training data is identified, the one or more data sources that supplied the training data may be identified and analyzed to determine the attacker's level of view into the pipeline. Based on the attacker's level of view, remedial actions may be performed that may update operation of the pipeline. The updated operation of the pipeline may reduce the computational expense for remediating the impact of the poisoned training data, and may reduce the likelihood of obtaining poisoned training data in the future.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
  • Publication number: 20250077656
    Abstract: Methods and systems for managing impact of inferences provided to inference consumers on the operation of the inference consumers are disclosed. Poisoned training data may be introduced and used to train an AI model, which may then poison the AI model and lead to poisoned inferences being provided to the inference consumers. To determine whether to remediate the poisoned inferences, a replacement inference may be generated and consumed by a digital twin of the inference consumers. A quantification of deviation of operation between the inference consumers after consuming the poisoned inference and operation of the digital twin after consuming the replacement inference may be compared to a threshold. If the quantification meets the threshold, an action set may be performed to remediate the impact of the poisoned inference.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, AMIHAI SAVIR
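The deviation-quantification step shared by this abstract and application 20250077650 above can be sketched as follows; the metric names and the mean-relative-deviation measure are assumptions made for the illustration, not details taken from the patents.

```python
# Hypothetical sketch: quantify the deviation between an inference
# consumer's operation after consuming a possibly poisoned inference and a
# digital twin's operation after consuming a replacement inference.

def deviation(real_metrics, twin_metrics):
    """Mean relative deviation across matching operational metrics."""
    total = sum(
        abs(real_metrics[k] - twin_metrics[k]) / max(abs(twin_metrics[k]), 1e-9)
        for k in twin_metrics
    )
    return total / len(twin_metrics)

def needs_remediation(real_metrics, twin_metrics, threshold=0.05):
    """Trigger a remediation action set when the quantified deviation
    between real and twin operation meets the threshold."""
    return deviation(real_metrics, twin_metrics) > threshold
```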
  • Patent number: 12235999
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, poisoned training data introduced into an instance of the AI models may be identified and the impact of the poisoned training data on the AI models may be efficiently mitigated. To do so, a first poisoned AI model instance may be obtained. Rather than re-training an un-poisoned AI model instance to remove the impact of poisoned training data, the first poisoned AI model instance may be selectively un-trained whenever poisoned training data is found in the training dataset. Subsequently, weights of the first poisoned AI model instance may be adjusted to account for future training data. As poisoned training data may occur infrequently, selectively un-training the AI model may conserve computing resources and minimize AI model downtime when compared to a full or partial re-training process of an un-poisoned AI model instance.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: February 25, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Amihai Savir, Tomer Kushnir
  • Patent number: 12175008
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, snapshots of the AI models may be obtained. The snapshot may include information regarding the structure of the AI model, information regarding the inferences obtained from the AI model, and/or information regarding training data used to train the AI model. When poisoned training data is identified, an instance of an AI model trained with the training data may be tainted. To repair the tainted AI model, another instance of the AI model that is not tainted may be identified using the AI model snapshots. The untainted AI model may then be updated using training data that is not poisoned. By doing so, the effect of poisoned training data may be mitigated in a computationally efficient manner.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: December 24, 2024
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Amihai Savir, Tomer Kushnir
  • Publication number: 20240362145
    Abstract: Methods and systems for managing data processing systems are disclosed. The data processing systems may be managed through identification and remediation of latency discrepancies on the data processing systems. The identification of latency discrepancies is facilitated by monitoring the latency of real-world devices executing on the data processing systems and comparing it to a model's prediction of the latency of the devices. Execution of the devices and model prediction may utilize the same application pathway. Because the same application pathway is used, the source of latency affecting the real-world device, called the negative communications modifier, may be isolated.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
  • Publication number: 20240362339
    Abstract: Methods and systems for securing deployments are disclosed. The deployments may be secured by generating and deploying security models to components of the deployment. The security models may be obtained through simulation of the operation of the deployment. During the simulation, different types of attacks on its operation and potential defenses to the attacks may be evaluated. The defenses able to defend against the different types of attacks may be used to generate the security models.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
  • Publication number: 20240362144
    Abstract: Methods and systems for managing performance of data processing systems throughout a distributed environment are disclosed. To manage performance of data processing systems, a system may include a data processing system manager and one or more data processing systems. If operation of a data processing system of the one or more data processing systems meets certain criteria, the data processing system manager may operate a digital twin of the data processing system. Simulated operational data obtained from the digital twin may be compared to operational data from the data processing system. If the simulated operational data and the operational data match within a threshold, the data processing system meeting the criteria may be due to environmental conditions. If the simulated operational data does not match the operational data within the threshold, the data processing system may be deteriorating and may require further intervention.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
  • Publication number: 20240362561
    Abstract: Methods and systems for managing data workflows performed by data processing systems throughout a distributed environment are disclosed. To manage data workflows, a system may include a data workflow manager and one or more data processing systems. The data workflow manager may host and operate a digital twin intended to duplicate operation of each corresponding data processing system involved in a data workflow. In the event of a loss of functionality of a data processing system involved in the data workflow, the data workflow manager may initiate operation of a corresponding digital twin and re-route the data workflow through the digital twin to facilitate continued performance of computer-implemented services based on the data workflow. When a replacement data processing system becomes available, the replacement data processing system may be inserted into the data workflow and the digital twin may no longer be used.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
  • Publication number: 20240362010
    Abstract: Methods and systems for managing operation of data processing systems with limited access to an uplink pathway are disclosed. To manage the operation, a system may include a data processing system manager, a data collector, and one or more data processing systems. The data processing system manager may identify future events that may impact operation of the data processing system using a digital twin and observational data. A cache hosted by the data processing system may store events and commands associated with the events. The commands may include action sets intended to mitigate impacts of the events. If the data processing system does not have commands associated with the simulated future events stored in the cache, the data processing system manager may provide instructions for replacing at least a portion of the commands stored in the cache with commands responsive to the simulated future events.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
  • Publication number: 20240362338
    Abstract: Methods and systems for monitoring security of data processing systems throughout a distributed environment are disclosed. To monitor security of data processing systems, a system may include a security manager and one or more data processing systems. The security manager may host a digital twin of each data processing system to simulate operations performed by the corresponding data processing system. The security manager may compare operations performed by a data processing system to operations performed by a digital twin of the data processing system. Differences in the operations performed by the data processing system and the digital twin may indicate the presence of adversarial interference with the data processing system. Data processing systems found to be performing unexpected operations may be subject to further analysis and, if needed, remedial action.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
  • Publication number: 20240364753
    Abstract: Methods and systems for securing deployments are disclosed. The deployments may be secured by generating and deploying security models to components of the deployment. The security models may be obtained through simulation of the operation of the deployment. During the simulation, predictions of different types of attacks on its operation and the potential defenses to the attacks may be evaluated. Further, limits may be imposed on the different attacks and potential defenses to simulate various scenarios that may be encountered in real systems.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
  • Publication number: 20240364534
    Abstract: Methods and systems for securing data processing systems are disclosed. The data processing systems may be secured through recognition analysis and simulation of commands that may be run on the data processing systems. A recognition analysis of the command may be used to query whether the command has already been simulated for its effects on the data processing system. If the command has already been simulated, then the data processing system may already know whether to execute it. Conversely, if the command has not been simulated, the command may be simulated to understand its effects on the data processing system. In understanding the effects of the command, the data processing system may determine whether to execute the command.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
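The simulate-then-cache gating described in this last abstract can be sketched as follows; `CommandGate`, its API, and the boolean safe/unsafe verdict are hypothetical names introduced only for the example.

```python
# Hypothetical sketch: gate command execution on a cache of prior simulation
# results; commands not yet recognized are simulated first, and the verdict
# is cached so repeated commands skip re-simulation.

class CommandGate:
    def __init__(self, simulate):
        self._simulate = simulate   # callable: command -> bool (safe to run?)
        self._verdicts = {}         # cache of command -> cached verdict

    def should_execute(self, command):
        if command not in self._verdicts:              # not yet simulated
            self._verdicts[command] = self._simulate(command)
        return self._verdicts[command]                 # cached verdict
```

A repeated command hits the cache, so the (potentially expensive) simulation runs at most once per distinct command.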