Patents by Inventor Ofir Ezrielev

Ofir Ezrielev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12271274
    Abstract: One method includes listening, by a storage vault, to a port that is specific to a particular data structure in the storage vault; determining that an air gap between the storage vault and an entity external to the storage vault is closed, such that communication between the storage vault and the external entity, by way of the port, is enabled; signaling, by the storage vault to the external entity, that the air gap is closed; and receiving, at the storage vault by way of the port, data from the external entity.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: April 8, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Jehuda Shemer, Amihai Savir
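    The flow described in this entry (listen on a data-structure-specific port, determine that the air gap is closed, signal the external entity, then receive data over the port) can be pictured with a short Python sketch. This is a minimal illustration only; the socket-based signaling, the `is_air_gap_closed` probe, and all names are assumptions, not the patented mechanism.

    ```python
    import socket

    VAULT_PORT = 50051  # assumed: port dedicated to one data structure in the vault

    def is_air_gap_closed(external_host: str, signal_port: int, timeout: float = 1.0) -> bool:
        """Assumed check: the air gap is 'closed' if the external entity is reachable."""
        try:
            with socket.create_connection((external_host, signal_port), timeout=timeout):
                return True
        except OSError:
            return False

    def vault_receive(external_host: str, signal_port: int) -> bytes:
        # Listen on the port that is specific to a particular data structure in the vault.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("", VAULT_PORT))
            srv.listen(1)
            # Determine whether the air gap is closed before proceeding.
            if not is_air_gap_closed(external_host, signal_port):
                return b""
            # Signal to the external entity that the air gap is closed.
            with socket.create_connection((external_host, signal_port)) as sig:
                sig.sendall(b"AIR_GAP_CLOSED")
            # Receive data from the external entity by way of the dedicated port.
            conn, _ = srv.accept()
            with conn:
                chunks = []
                while data := conn.recv(4096):
                    chunks.append(data)
            return b"".join(chunks)
    ```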
  • Patent number: 12253905
    Abstract: Methods and systems for managing operation of a data pipeline are disclosed. To manage the operation, a system may include one or more data sources, a data manager, and one or more downstream consumers. Requests for data from the downstream consumers may have unexpected characteristics that may cause misalignment of application programming interfaces used by the data pipeline. To remediate the misalignment and reduce occurrences of future misalignments, an error message may be obtained indicating a type of error associated with the request. The error message may be used to obtain an error classification for the request and an action set may be performed based on the error classification. In addition, data provided to the downstream consumers may cause misalignment of an application programming interface used by the downstream consumers. Similarly, an error message may be obtained and used to identify an appropriate action set to remediate the misalignment.
    Type: Grant
    Filed: June 29, 2023
    Date of Patent: March 18, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Hanna Yehuda, Kristen Jeanne Walsh
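    One way to picture the error-handling loop in this entry (error message, then error classification, then an action set) is the Python sketch below. The classification rules and action names are invented for illustration and are not taken from the patent.

    ```python
    # Assumed, simplified mapping from error classification to a remediation action set.
    ACTION_SETS: dict[str, list[str]] = {
        "schema_mismatch": ["realign_api_schema", "notify_downstream_consumer"],
        "missing_field": ["apply_default_values", "log_incident"],
        "unknown": ["escalate_to_operator"],
    }

    def classify_error(error_message: str) -> str:
        """Toy classifier; a real pipeline might use a rule engine or a trained model."""
        msg = error_message.lower()
        if "schema" in msg:
            return "schema_mismatch"
        if "missing" in msg:
            return "missing_field"
        return "unknown"

    def remediate(error_message: str) -> list[str]:
        actions = ACTION_SETS[classify_error(error_message)]
        for action in actions:
            print(f"performing action: {action}")
        return actions

    remediate("consumer request failed: schema version mismatch on field 'price'")
    ```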
  • Patent number: 12254109
    Abstract: Methods and systems for managing access to data stored in data storage systems are disclosed. An end device and/or user thereof may require access to sensitive data of varying sensitivity levels stored in a data storage system. To prevent malicious parties from gaining access to the sensitive data, an access control system may be implemented. The access control system may include a registration process that registers end device and user combinations and assigns cryptographic key pairs to each registered combination. The key pairs may be generated using information specific to the sensitivity level of the data and managed using a key tree structure. Before sensitive data may be accessed, a requesting device and its associated user may be authenticated using the key pairs generated during registration. The sensitive data may be encrypted using sensitivity level and device-specific encryption.
    Type: Grant
    Filed: February 28, 2023
    Date of Patent: March 18, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Naor Radami, Amos Zamir
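    To make the registration idea in this entry concrete (keys bound to a device/user combination and to a sensitivity level), here is a deliberately simplified Python sketch using symmetric key derivation from the standard library. The patent describes cryptographic key pairs managed in a key tree; neither is reproduced here, and all names are assumptions.

    ```python
    import hashlib
    import hmac
    import os

    MASTER_SECRET = os.urandom(32)  # assumed root secret of the key hierarchy

    def derive_key(device_id: str, user_id: str, sensitivity_level: int) -> bytes:
        """Derive a key bound to a registered device/user combination and a sensitivity level."""
        info = f"{device_id}|{user_id}|level:{sensitivity_level}".encode()
        return hmac.new(MASTER_SECRET, info, hashlib.sha256).digest()

    # Registration: assign a key per sensitivity level to the device/user combination.
    registry = {
        ("laptop-7", "alice"): {lvl: derive_key("laptop-7", "alice", lvl) for lvl in (1, 2, 3)},
    }

    def authenticate(device_id: str, user_id: str, sensitivity_level: int, presented_key: bytes) -> bool:
        """Allow access to data at a sensitivity level only if the presented key matches."""
        expected = registry.get((device_id, user_id), {}).get(sensitivity_level)
        return expected is not None and hmac.compare_digest(expected, presented_key)
    ```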
  • Publication number: 20250086065
    Abstract: One example method includes assigning, at a production site, a priority to a portion of a dataset to be backed up; checking to determine whether the priority meets or exceeds a threshold priority; and, when the priority meets or exceeds the threshold priority and an air gap between the production site and a storage vault is closed, backing up, by way of the closed air gap, the portion of the dataset to the storage vault.
    Type: Application
    Filed: November 27, 2024
    Publication date: March 13, 2025
    Inventors: Ofir Ezrielev, Jehuda Shemer, Amihai Savir
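    The decision in this entry reduces to a priority check gated on the air-gap state. The sketch below is illustrative only; the `air_gap_closed` flag stands in for whatever mechanism the production site uses to detect that the gap is closed.

    ```python
    def maybe_back_up(portion_id: str, priority: int, threshold: int, air_gap_closed: bool) -> bool:
        """Back up a portion of the dataset only when it is important enough and the path exists."""
        if priority >= threshold and air_gap_closed:
            print(f"backing up {portion_id} to the storage vault via the closed air gap")
            return True
        print(f"deferring {portion_id} (priority below threshold or air gap open)")
        return False

    maybe_back_up("orders-2024-q4", priority=9, threshold=7, air_gap_closed=True)
    ```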
  • Patent number: 12248374
    Abstract: Last resort access to backups is disclosed. An encrypted backup associated with a first system or vault is stored in a second vault associated with another system. If a key needed to decrypt a backup in the first vault is unavailable, an encrypted copy of a backup in the second vault may be used for the recovery operation. Incremental backups from the first and/or second vault, which may be difference incremental backups and may be unencrypted, may be used in the recovery operation.
    Type: Grant
    Filed: April 27, 2023
    Date of Patent: March 11, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Yehiel Zohar, Lee Serfaty
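    A minimal sketch of the fallback order implied by this entry: prefer the local backup when its key is available, otherwise fall back to the encrypted copy held in the other vault, then apply incremental backups. The function names and placeholder bodies are assumptions for illustration only.

    ```python
    from typing import Optional

    def decrypt(blob: bytes, key: bytes) -> bytes:
        return blob  # placeholder; a real system would use an authenticated cipher

    def apply_increment(base: bytes, increment: bytes) -> bytes:
        return base + increment  # placeholder for applying a difference incremental backup

    def recover(local_backup: bytes, local_key: Optional[bytes],
                remote_copy: bytes, remote_key: Optional[bytes],
                incrementals: list[bytes]) -> bytes:
        """Last-resort recovery: use whichever encrypted full backup can actually be decrypted."""
        if local_key is not None:
            base = decrypt(local_backup, local_key)
        elif remote_key is not None:
            # The key for the first vault is unavailable: use the copy stored in the second vault.
            base = decrypt(remote_copy, remote_key)
        else:
            raise RuntimeError("no usable key for either vault")
        for increment in incrementals:  # incrementals may be unencrypted
            base = apply_increment(base, increment)
        return base
    ```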
  • Patent number: 12249070
    Abstract: Methods and systems for identifying areas of interest in an image and management of images are disclosed. To manage identification of areas of interest in an image, subject matter expert driven processes may be used to identify the areas of interest. The identified areas of interest may be used to establish plans to guide subsequent use of the image. The identified areas of interest may also be used to establish plans to cache portions of the image to speed subsequent use of the image.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: March 11, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Amihai Savir, Avitan Gefen, Nicole Reineke
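    The caching plan in this entry can be pictured as mapping expert-identified regions of interest onto image tiles and pre-loading those tiles. The tiling scheme and data structures below are assumptions for illustration.

    ```python
    from dataclasses import dataclass

    TILE = 256  # assumed tile size in pixels

    @dataclass
    class Region:
        x: int
        y: int
        w: int
        h: int  # an expert-identified area of interest

    def tiles_to_cache(regions: list[Region]) -> set[tuple[int, int]]:
        """Translate areas of interest into the set of image tiles worth pre-loading."""
        tiles: set[tuple[int, int]] = set()
        for r in regions:
            for ty in range(r.y // TILE, (r.y + r.h) // TILE + 1):
                for tx in range(r.x // TILE, (r.x + r.w) // TILE + 1):
                    tiles.add((tx, ty))
        return tiles

    print(sorted(tiles_to_cache([Region(x=1000, y=400, w=300, h=200)])))
    ```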
  • Publication number: 20250077650
    Abstract: Methods and systems for managing impact of inferences provided to inference consumers are disclosed. An artificial intelligence (AI) model may be poisoned by poisoned training data and may provide poisoned inferences to an inference consumer. To determine whether to remediate the poisoned inference, first use of the poisoned inference may be compared to second use of a second inference generated by a second AI model that is not believed to be poisoned. The first use and the second use may be the same type of use and a deviation between the first use and the second use may indicate an extent to which the poisoned inference impacted the inference consumer. A quantification of the deviation may be obtained and compared to a quantification threshold. If the quantification meets the quantification threshold, an action set may be performed to remediate the impact of the poisoned inference.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
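    This entry amounts to a deviation check between two uses of inferences, one from the suspect model and one from a reference model believed to be unpoisoned. The sketch below uses a simple numeric deviation and an arbitrary threshold; both are assumptions, not the patented quantification.

    ```python
    def quantify_deviation(use_poisoned: list[float], use_reference: list[float]) -> float:
        """Assumed quantification: mean absolute difference between the two usage traces."""
        return sum(abs(a - b) for a, b in zip(use_poisoned, use_reference)) / len(use_reference)

    def maybe_remediate(use_poisoned: list[float], use_reference: list[float],
                        threshold: float = 0.5) -> bool:
        deviation = quantify_deviation(use_poisoned, use_reference)
        if deviation >= threshold:
            print(f"deviation {deviation:.2f} meets threshold; performing remediation action set")
            return True
        print(f"deviation {deviation:.2f} below threshold; no remediation needed")
        return False

    maybe_remediate([10.0, 12.5, 9.0], [10.2, 11.0, 9.1])
    ```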
  • Publication number: 20250077954
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, an instance of an AI model may not be re-trained using training data determined to be potentially poisoned. By doing so, malicious attacks intending to influence the AI model using poisoned training data may be prevented. To do so, a first causal relationship present in historical training data may be compared to a second causal relationship present in a candidate training data set. The first causal relationship and the second causal relationship may be expected to be similar within a threshold. If a difference between the first causal relationship and the second causal relationship is not within the threshold, the candidate training data may be treated as including poisoned training data.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
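    To ground the comparison described in this entry, the sketch below compares a simple statistical relationship between two variables in historical data against the same relationship in a candidate training set, rejecting the candidate when the relationship diverges beyond a threshold. A Pearson correlation stands in for the causal analysis purely for illustration.

    ```python
    import statistics

    def correlation(xs: list[float], ys: list[float]) -> float:
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def candidate_is_suspect(historical_xy: tuple[list[float], list[float]],
                             candidate_xy: tuple[list[float], list[float]],
                             threshold: float = 0.3) -> bool:
        """Flag the candidate set as potentially poisoned if the relationship shifts too much."""
        return abs(correlation(*historical_xy) - correlation(*candidate_xy)) > threshold
    ```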
  • Publication number: 20250080587
    Abstract: Methods and systems for managing an artificial intelligence (AI) model are disclosed. An AI model may be part of an evolving AI model pipeline, the processes of which may include obtaining training data from data sources used to update the AI model. An attacker may introduce poisoned training data via one or more of the data sources as a form of attack on the AI model. When the poisoned training data is identified, the poisoned training data may be compared to existing training data to determine the attacker's goal. Based on the attacker's goal, remedial actions may be performed that may update operation of the pipeline. The updated operation of the pipeline may reduce the computational expense for remediating impact of the poisoned training data, and may reduce the likelihood of obtaining poisoned training data in the future.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
  • Publication number: 20250077656
    Abstract: Methods and systems for managing impact of inferences provided to inference consumers on the operation of the inference consumers are disclosed. Poisoned training data may be introduced and used to train an AI model, which may then poison the AI model and lead to poisoned inferences being provided to the inference consumers. To determine whether to remediate the poisoned inferences, a replacement inference may be generated and consumed by a digital twin of the inference consumers. A quantification of deviation of operation between the inference consumers after consuming the poisoned inference and operation of the digital twin after consuming the replacement inference may be compared to a threshold. If the quantification meets the threshold, an action set may be performed to remediate the impact of the poisoned inference.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
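    The digital-twin comparison in this entry can be sketched as running the same operational model twice, once with the suspect inference and once with a replacement, and measuring how far the outcomes diverge. The `operate` function and the deviation metric below are placeholders, not the patented method.

    ```python
    def operate(inference: float, workload: list[float]) -> float:
        """Placeholder for the inference consumer's operation (and its digital twin)."""
        # e.g. the inference sets a capacity limit; served demand depends on the workload
        return sum(min(demand, inference) for demand in workload)

    def remediation_needed(poisoned_inference: float, replacement_inference: float,
                           workload: list[float], threshold: float) -> bool:
        actual = operate(poisoned_inference, workload)   # consumer using the poisoned inference
        twin = operate(replacement_inference, workload)  # digital twin using the replacement
        deviation = abs(actual - twin) / max(abs(twin), 1e-9)
        return deviation >= threshold  # True -> perform the remediation action set

    print(remediation_needed(3.0, 8.0, workload=[5.0, 7.0, 9.0], threshold=0.2))
    ```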
  • Publication number: 20250077955
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, an instance of an AI model may not be re-trained using training data determined to be too similar to previously used training data. By doing so, malicious attacks intending to shift the AI model in a particular direction using poisoned training data may be prevented. To do so, a clustering analysis may be performed using a candidate training data set and variable clustering criteria prior to performing re-training of an instance of an AI model using the candidate training data set. The analysis may result in a score. If the score exceeds a score threshold, the candidate training data set may be considered to contain poisoned training data. If the score does not exceed the score threshold, the candidate training data set may be accepted as usable to train an instance of the AI model.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
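    A rough way to picture the check in this entry: score how close the candidate batch sits to clusters formed from previously used training data, and reject it when the similarity score exceeds a threshold. The centroid-distance scoring below is an illustrative stand-in for the patent's clustering analysis and variable clustering criteria.

    ```python
    import math

    Point = tuple[float, float]

    def centroid(points: list[Point]) -> Point:
        return (sum(p[0] for p in points) / len(points), sum(p[1] for p in points) / len(points))

    def similarity_score(candidate: list[Point], prior_clusters: list[list[Point]]) -> float:
        """Higher score means the candidate batch sits closer to previously used training data."""
        c = centroid(candidate)
        nearest = min(math.dist(c, centroid(cluster)) for cluster in prior_clusters)
        return 1.0 / (1.0 + nearest)

    def accept_for_retraining(candidate: list[Point], prior_clusters: list[list[Point]],
                              score_threshold: float = 0.8) -> bool:
        return similarity_score(candidate, prior_clusters) <= score_threshold
    ```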
  • Publication number: 20250077713
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, an instance of an AI model may not be re-trained using training data determined to be potentially poisoned. By doing so, malicious attacks intending to influence the AI model using poisoned training data may be prevented. To do so, a first level of strength of a first causal relationship present in historical training data may be compared to a second level of strength of a second causal relationship present in a candidate training data set. The first level of strength and the second level of strength may be expected to be similar within a threshold. If a difference between the first level of strength and the second level of strength is not within the threshold, the candidate training data may be treated as including poisoned training data.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
  • Publication number: 20250077657
    Abstract: Methods and systems for managing an artificial intelligence (AI) model are disclosed. An AI model may be part of an evolving AI model pipeline, the processes of which may include obtaining training data from data sources used to update the AI model. An attacker may introduce poisoned training data via one or more of the data sources as a form of attack on the AI model. When the poisoned training data is identified, the one or more data sources that supplied the training data may be identified and analyzed to determine the attacker's level of view into the pipeline. Based on the attacker's level of view, remedial actions may be performed that may update operation of the pipeline. The updated operation of the pipeline may reduce the computational expense for remediating impact of the poisoned training data, and may reduce the likelihood of obtaining poisoned training data in the future.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
  • Publication number: 20250077659
    Abstract: Methods and systems for managing inferences throughout a distributed environment are disclosed. Poisoned training data may be introduced and used to train an AI model, which may then poison the AI model and lead to poisoned inferences being provided to the inference consumers. Entities may submit challenges alleging that decisions made by the inference consumers are due to consumption of the poisoned inferences. To respond to the challenges, a replacement inference may be generated and consumed by a digital twin of the inference consumers. A quantification of deviation of operation between the inference consumers after consuming the poisoned inference and operation of the digital twin after consuming the replacement inference may be obtained and included in a response to the challenge. The response may also include an extent of agreement or disagreement with the allegation.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
  • Publication number: 20250077953
    Abstract: Methods and systems for managing evolving artificial intelligence (AI) models are disclosed. The evolving AI models may be used to generate inferences that may be provided to downstream consumers during a provisioning process. The downstream consumers may rely on the accuracy and consistency of the inferences provided during the provisioning process to provide desired computer-implemented services. The AI models may be updated (e.g., with new training data) automatically and/or frequently over time in order to increase the accuracy of inferences provided by the AI model. However, inferences provided by a newly updated instance of an AI model may be inconsistent with inferences provided by prior instances of the AI model (e.g., due to AI model poisoning). Therefore, to increase the likelihood of providing accurate and consistent (e.g., unpoisoned) inferences to the downstream consumers, an appropriate instance of the AI model may be identified for use in the provisioning process.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
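    The provisioning choice in this entry can be framed as picking, from the available instances of the model, one whose recent inferences are both accurate and consistent with earlier instances. The scoring below is an invented illustration, not the patented selection criteria.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ModelInstance:
        name: str
        accuracy: float     # e.g. measured on held-out validation data
        consistency: float  # agreement with inferences from prior instances, 0..1

    def select_instance(instances: list[ModelInstance], min_consistency: float = 0.9) -> ModelInstance:
        """Prefer the most accurate instance among those consistent with prior behaviour."""
        eligible = [m for m in instances if m.consistency >= min_consistency]
        pool = eligible or instances  # fall back to all instances if none are consistent enough
        return max(pool, key=lambda m: m.accuracy)

    best = select_instance([
        ModelInstance("v41", accuracy=0.93, consistency=0.97),
        ModelInstance("v42", accuracy=0.95, consistency=0.62),  # possibly poisoned update
    ])
    print(best.name)  # v41
    ```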
  • Publication number: 20250077910
    Abstract: Methods and systems for managing impact of inferences provided to inference consumers on decisions made by the inference consumers are disclosed. Poisoned training data may be introduced and used to train an AI model, which may then poison the AI model and lead to poisoned inferences being provided to the inference consumers. Inference consumers may deploy hardware to customers based on the poisoned inferences. To determine whether to modify the deployed hardware, a performance cost associated with the deployed hardware may be obtained. The performance cost may indicate a deviation between operation of the deployed hardware and operation of hardware that may have been deployed if an unpoisoned inference was used. If the performance cost meets a performance cost threshold, at least one additional hardware component may be deployed to the customer.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Ofir Ezrielev, Tomer Kushnir, Amihai Savir
  • Patent number: 12242347
    Abstract: Methods and systems for device shutdown in a deployment are disclosed. Device shutdown may be considered to conserve energy and simplify processes in a deployment. To conserve energy and simplify processes, all devices within a deployment may undergo a redundancy analysis and a qualification analysis. The redundancy analysis may produce lists of redundant and non-redundant devices. All redundant devices may be candidates for device shutdown. Next, the qualification analysis may qualify devices for shutdown based on energy consumption, output data accuracy, and uncertainty. Devices that do not meet the prescribed qualifiers may also be candidates for shutdown. With all devices that are candidates for shutdown assembled in a list, device shutdown may commence in the deployment.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: March 4, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Boris Shpilyuck, Igor Dubrovsky, Nisan Haimov
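    The two-stage screen described in this entry (a redundancy analysis followed by qualification on energy consumption and data quality) can be illustrated with the sketch below; the fields and cut-offs are assumed for illustration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        role: str             # devices sharing a role are treated as redundant here
        energy_watts: float
        data_accuracy: float  # 0..1

    def shutdown_candidates(devices: list[Device], max_energy: float = 5.0,
                            min_accuracy: float = 0.9) -> list[str]:
        # Redundancy analysis: every device beyond the first in a role is considered redundant.
        seen_roles: set[str] = set()
        redundant: set[str] = set()
        for d in devices:
            if d.role in seen_roles:
                redundant.add(d.name)
            seen_roles.add(d.role)
        # Qualification analysis: devices failing the energy or accuracy qualifiers also qualify.
        unqualified = {d.name for d in devices
                       if d.energy_watts > max_energy or d.data_accuracy < min_accuracy}
        return sorted(redundant | unqualified)
    ```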
  • Patent number: 12244511
    Abstract: Methods and systems for managing data collection throughout a distributed environment are disclosed. To manage data collection, a system may include a data aggregator and a data collector. The data collector may utilize a consensus sequence to generate reduced-size data transmissions. The consensus sequence may be made up of patterns of data that occur frequently in data collected by the data collector. Therefore, data collected by the data collector may be condensed by replacing segments of the data with pointer pairs, a pointer pair being an indicator of a portion of the consensus sequence that matches a segment of data. The data collector may transmit these pointer pairs, along with any additional segments of data, to the data aggregator instead of transmitting full data sets. The data aggregator may reconstruct data from the data collectors using the reduced-size data and the consensus sequence.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: March 4, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Jehuda Shemer
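    The consensus-sequence scheme in this entry is close in spirit to dictionary compression: segments of collected data that match a span of the shared consensus sequence are replaced by a pointer pair (offset, length), and anything else is sent literally. The greedy matcher below is a simplified illustration, not the patented encoding.

    ```python
    MIN_MATCH = 4  # assumed minimum segment length worth replacing with a pointer pair

    def compress(data: bytes, consensus: bytes) -> list:
        """Encode data as a mix of ('ptr', offset, length) pairs and ('lit', bytes) segments."""
        encoded, i = [], 0
        while i < len(data):
            best_off, best_len = -1, 0
            for off in range(len(consensus)):
                length = 0
                while (i + length < len(data) and off + length < len(consensus)
                       and data[i + length] == consensus[off + length]):
                    length += 1
                if length > best_len:
                    best_off, best_len = off, length
            if best_len >= MIN_MATCH:
                encoded.append(("ptr", best_off, best_len))  # pointer pair into the consensus
                i += best_len
            else:
                encoded.append(("lit", data[i:i + 1]))        # additional segment sent literally
                i += 1
        return encoded

    def reconstruct(encoded: list, consensus: bytes) -> bytes:
        parts = []
        for segment in encoded:
            if segment[0] == "ptr":
                _, off, length = segment
                parts.append(consensus[off:off + length])
            else:
                parts.append(segment[1])
        return b"".join(parts)

    consensus = b"temperature=21.5;humidity=40;status=ok;"
    sample = b"temperature=21.9;humidity=40;status=ok;"
    assert reconstruct(compress(sample, consensus), consensus) == sample
    ```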
  • Patent number: 12242447
    Abstract: Methods and systems for managing operation of a data pipeline are disclosed. To manage the operation, a system may include one or more data sources, a data manager, and one or more downstream consumers. Interruptions to the operation may impact provision of data processing services by the data pipeline and may cause the data processing services to no longer align with operation quality goals for the data pipeline. To maintain compliance with the operation quality goals, the operation may be monitored over time. Operation data may be obtained for the data pipeline and may be used to determine representations of operation quality of the data pipeline. The representations of operation quality of the data pipeline may be compared to the operation quality goals and actions may be performed to remediate differences between the representations of operation quality of the data pipeline and the operation quality goals.
    Type: Grant
    Filed: June 29, 2023
    Date of Patent: March 4, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Hanna Yehuda, Kristen Jeanne Walsh
  • Patent number: 12235999
    Abstract: Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, poisoned training data introduced into an instance of the AI models may be identified and the impact of the poisoned training data on the AI models may be efficiently mitigated. To do so, a first poisoned AI model instance may be obtained. Rather than re-training an un-poisoned AI model instance to remove the impact of poisoned training data, the first poisoned AI model instance may be selectively un-trained whenever poisoned training data is found in the training dataset. Subsequently, weights of the first poisoned AI model instance may be adjusted to account for future training data. As poisoned training data may occur infrequently, selectively un-training the AI model may conserve computing resources and minimize AI model downtime when compared to a full or partial re-training process of an un-poisoned AI model instance.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: February 25, 2025
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Amihai Savir, Tomer Kushnir
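    The selective un-training in this entry can be pictured, at toy scale, as reversing the exact update that a poisoned batch contributed instead of re-training from scratch. The one-parameter model below is purely illustrative and ignores the complications of real un-training (optimizer state, non-linearity, interleaved batches).

    ```python
    LR = 0.01  # assumed learning rate

    def gradient(w: float, batch: list[tuple[float, float]]) -> float:
        """Mean-squared-error gradient for a one-parameter model y = w * x."""
        return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

    def train_step(w: float, batch: list[tuple[float, float]]) -> tuple[float, float]:
        """Apply one update and return (new_weight, update) so the update can be undone later."""
        update = -LR * gradient(w, batch)
        return w + update, update

    def untrain(w: float, recorded_update: float) -> float:
        """Selective un-training: reverse the recorded contribution of a poisoned batch."""
        return w - recorded_update

    w = 1.0
    clean = [(1.0, 2.0), (2.0, 4.0)]        # consistent with y = 2x
    poisoned = [(1.0, -5.0), (2.0, -10.0)]  # adversarial labels pulling the weight negative

    w, _ = train_step(w, clean)
    before_poison = w
    w, poison_update = train_step(w, poisoned)  # poisoned batch shifts the weight
    w = untrain(w, poison_update)               # undo only that batch's contribution
    print(abs(w - before_poison) < 1e-9)        # True: poisoned contribution removed
    ```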