Patents by Inventor Vladimir Shveidel

Vladimir Shveidel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250244890
Abstract: Techniques for using data logs with bonded page descriptors (PDs) in storage systems. The techniques include, in response to a write request from a storage client, performing, in a log structured data log, a single object allocation for a bonded page descriptor (PD)-page buffer (PB) pair, storing and persisting, in a PB of the bonded PD-PB pair, user data specified by the write request, and storing and persisting, in a PD object of the bonded PD-PB pair, metadata (MD) related to the user data. The techniques include, once the user data and related MD are stored and persisted in the bonded PD-PB pair, sending an acknowledgment message to the storage client, and de-staging, flushing, or transferring, in the background, the user data and/or related MD from the bonded PD-PB pair to a storage object. By performing a single object allocation, processing costs related to performing the object allocation can be reduced.
    Type: Application
    Filed: January 25, 2024
    Publication date: July 31, 2025
    Inventors: Vladimir Shveidel, Jenny Derzhavetz
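The single-allocation idea in the abstract above can be illustrated with a minimal Python sketch. All names (`BondedPDPB`, `DataLog`) are hypothetical; this shows the concept of one allocation covering both descriptor and buffer, not the patented implementation:

```python
from dataclasses import dataclass, field

@dataclass
class BondedPDPB:
    # One allocated object bonding the page buffer (user data)
    # to the page descriptor (metadata about that data).
    pb: bytes = b""
    pd: dict = field(default_factory=dict)

class DataLog:
    def __init__(self):
        self.entries = []   # log-structured sequence of bonded pairs
        self.backend = {}   # destination storage objects

    def write(self, lba, data):
        # A single allocation yields both PD and PB (instead of two).
        pair = BondedPDPB(pb=data, pd={"lba": lba, "len": len(data)})
        self.entries.append(pair)
        return "ACK"        # acknowledge once both halves are persisted

    def flush(self):
        # Background de-stage from the bonded pair to the storage object.
        while self.entries:
            pair = self.entries.pop(0)
            self.backend[pair.pd["lba"]] = pair.pb
```

The acknowledgment is sent as soon as the pair is persisted in the log; moving the data to its final storage object happens later, off the write path.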
  • Publication number: 20250231926
    Abstract: A technique of managing metadata includes receiving metadata changes into active tablets. When the active tablets are filled, the active tablets are changed to frozen tablets, whereupon previously frozen tablets are freed and changed to the active tablets. The technique further includes destaging metadata changes from the frozen tablets to pages in backend storage under read locks. Tablet generation identifiers (TGIDs) are assigned to successive generations of tablets, and a TGID of the frozen tablets is stored with the pages that are written as part of destaging from the frozen tablets.
    Type: Application
    Filed: January 17, 2024
    Publication date: July 17, 2025
    Inventors: Vladimir Shveidel, Dror Zalstein, Nimrod Shani, Jenny Derzhavetz, Michael Litvak
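The active/frozen tablet rotation and generation tagging described above can be sketched as follows (a simplified illustration; class and field names are invented, and locking is omitted):

```python
class TabletSet:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.tgid = 0                  # tablet generation identifier
        self.active, self.frozen = [], []
        self.pages = {}                # backend pages: key -> (value, tgid)

    def record(self, key, delta):
        self.active.append((key, delta))
        if len(self.active) >= self.capacity:
            self._swap_and_destage()

    def _swap_and_destage(self):
        # Filled active tablets become frozen; previously frozen tablets
        # were destaged and freed, becoming the new active set.
        self.frozen, self.active = self.active, []
        for key, delta in self.frozen:
            # Pages written during destaging carry the frozen set's TGID.
            val, _ = self.pages.get(key, (0, None))
            self.pages[key] = (val + delta, self.tgid)
        self.frozen = []
        self.tgid += 1                 # advance to the next generation
```

Storing the TGID with each destaged page lets later readers tell which tablet generation a page reflects.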
  • Publication number: 20250225111
    Abstract: A method in a data storage system includes foreground processes that produce incorrect-high reference counts greater than corresponding numbers of actual references to associated data instances (e.g., virtual large blocks), representing loss of storage capacity. A background process corrects a reference count for a data instance by (1) copying the data instance to a new data instance with an initial reference count of zero and making the new data instance unavailable for reclaiming, (2) scanning a set of referencing structures for all references to the data instance, and for each reference (i) replacing the reference with a new reference to the new data instance, and (ii) incrementing the reference count of the new data instance, and (3) upon completing the scanning, making the new data instance available for eventual reclaiming based on its reference count reaching zero. The process restores reference count accuracy and reduces loss/leakage of storage capacity.
    Type: Application
    Filed: January 4, 2024
    Publication date: July 10, 2025
    Inventors: Uri Shabi, Vladimir Shveidel, Jonathan Volij
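The three-step background correction above can be expressed as a small Python sketch (names are illustrative; real referencing structures are on-disk metadata, not a dict):

```python
class RefcountCorrector:
    def __init__(self, instances, references):
        self.instances = instances    # id -> {"refcount": n, "reclaimable": bool}
        self.references = references  # referencing structure: ref -> instance id
        self._next_id = max(instances) + 1

    def correct(self, inst_id):
        # (1) Copy to a new instance; refcount 0, not yet reclaimable.
        new_id = self._next_id
        self._next_id += 1
        self.instances[new_id] = {"refcount": 0, "reclaimable": False}
        # (2) Scan all references; repoint each one and count it.
        for ref, target in list(self.references.items()):
            if target == inst_id:
                self.references[ref] = new_id
                self.instances[new_id]["refcount"] += 1
        # (3) Scan complete: eligible for reclaim once the count hits zero.
        self.instances[new_id]["reclaimable"] = True
        del self.instances[inst_id]   # retire the over-counted instance
        return new_id
```

After the scan, the new instance's count equals the number of actual references, so capacity held by the inflated count can eventually be reclaimed.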
  • Publication number: 20250209020
Abstract: In at least one embodiment, processing can include: assigning services a non-critical polling priority or a critical polling priority, where each service is associated with a queue set including a completion queue (CQ) and receiving queue (RQ) associated with received messages stored in memory of the local node and sent by a remote node via remote direct memory access (RDMA); partitioning CQs of the queue sets in accordance with assigned polling priorities to generate a non-critical CQ list and a critical CQ list; polling the non-critical CQ list by a non-critical poller at a non-critical polling frequency for completion signals or indicators associated with received non-critical messages of the local node to be serviced; and polling the critical CQ list by a critical poller at a critical polling frequency for completion signals or indicators associated with received critical messages of the local node to be serviced.
    Type: Application
    Filed: December 22, 2023
    Publication date: June 26, 2025
    Applicant: Dell Products L.P.
    Inventors: Vladimir Shveidel, Yuri Chernyavsky
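The CQ partitioning step above can be sketched simply (illustrative names; real CQs are RDMA hardware queues, modeled here as lists). The critical poller would then run `poll` over its list at a higher frequency than the non-critical poller:

```python
def partition_cqs(queue_sets):
    """queue_sets: list of {"priority": "critical"|"non-critical", "cq": list}."""
    critical = [q["cq"] for q in queue_sets if q["priority"] == "critical"]
    noncritical = [q["cq"] for q in queue_sets if q["priority"] == "non-critical"]
    return critical, noncritical

def poll(cq_list):
    """Drain completion indicators from every CQ in the given list."""
    done = []
    for cq in cq_list:
        while cq:
            done.append(cq.pop(0))
    return done
```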
  • Publication number: 20250190272
Abstract: Per-core token pools and a shared token pool are provided, where each one of the per-core token pools corresponds to one of multiple processor cores. When the per-core token pool corresponding to the processor core used to process a received host I/O request contains at least the number of tokens required by the host I/O request, the tokens required by the host I/O request are allocated from that per-core token pool and the host I/O request is processed without accessing the shared token pool. The allocated tokens are returned to the per-core token pool until the number of tokens contained in the per-core token pool reaches a target quota for the per-core token pool. The remaining allocated tokens are returned to the shared token pool. Token imbalances may be addressed by periodically adjusting target quotas, and/or by allowing tokens to be dynamically allocated from another per-core token pool.
    Type: Application
    Filed: December 12, 2023
    Publication date: June 12, 2025
    Inventors: Maher Kachmar, Vamsi K. Vankamamidi, Vladimir Shveidel
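The allocate/release paths above can be sketched as follows (a minimal illustration with invented names; the fast path never touches the shared pool, and releases refill the per-core pool only up to its target quota):

```python
class TokenPools:
    def __init__(self, cores, quota, shared):
        self.quota = quota
        self.per_core = {c: quota for c in range(cores)}
        self.shared = shared

    def allocate(self, core, n):
        # Fast path: satisfy from this core's pool, no shared-pool access.
        if self.per_core[core] >= n:
            self.per_core[core] -= n
            return True
        # Slow path: fall back to the shared pool.
        if self.shared >= n:
            self.shared -= n
            return True
        return False

    def release(self, core, n):
        # Refill the per-core pool up to its target quota;
        # any overflow goes back to the shared pool.
        room = self.quota - self.per_core[core]
        to_core = min(n, room)
        self.per_core[core] += to_core
        self.shared += n - to_core
```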
  • Publication number: 20250147890
    Abstract: Techniques for processing a read I/O operation that reads first content stored at a target logical address can include: determining, using the target logical address as a first key to index into a first cache, whether the first cache includes a first cache entry caching first metadata used to access a first physical storage location including the first content stored at the target logical address; responsive to determining the first cache includes the first cache entry, determining, using the first metadata as a second key to index into a second cache, whether the second cache includes a second cache entry caching the first content stored at the target logical address; and responsive to determining the second cache includes the second entry, returning the first content from the second entry of the second cache in response to the read I/O operation.
    Type: Application
    Filed: January 9, 2025
    Publication date: May 8, 2025
    Applicant: Dell Products L.P.
    Inventors: Vladimir Shveidel, Vamsi K. Vankamamidi
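The two-level lookup in the abstract above (logical address keys the first cache, the cached metadata keys the second) can be sketched as:

```python
def cached_read(lba, md_cache, content_cache, backend, mapper):
    """Two-level lookup: LBA -> mapping metadata -> content.
    `mapper` and `backend` stand in for the slow resolution and
    physical-read paths (names are illustrative)."""
    md = md_cache.get(lba)                # first cache, keyed by LBA
    if md is None:
        md = mapper[lba]                  # slow path: resolve mapping MD
        md_cache[lba] = md
    content = content_cache.get(md)       # second cache, keyed by MD
    if content is None:
        content = backend[md]             # slow path: read physical location
        content_cache[md] = content
    return content                        # double hit: no backend access
```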
  • Patent number: 12287740
Abstract: In at least one embodiment, processing can include receiving, at a first node, a read request directed to a logical address LA owned by a second node. The first node can locally cache content and an address hint corresponding to LA. The first node can issue a request to the second node. The request can include a flag to suppress the second node from returning content stored at the target logical address. The first node can receive a response including an address used to read current content of LA from back-end non-volatile storage. The first node can determine whether the address matches the address hint cached on the first node. If the first node determines the address and address hint match, the cached content of LA stored on the first node is valid and can be returned in response to the read as current content stored at LA.
    Type: Grant
    Filed: June 7, 2023
    Date of Patent: April 29, 2025
Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Uri Shabi, Vamsi K. Vankamamidi, Samuel L. Mullis, II
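The hint-validation step above can be sketched in a few lines (illustrative names; `resolve_remote` stands in for the suppressed-content request to the owning node):

```python
def read_with_hint(lba, local_cache, resolve_remote):
    """local_cache: {lba: (content, addr_hint)}.
    resolve_remote(lba) asks the owning node for the current backend
    address only; the flag suppressing content return is implied."""
    content, hint = local_cache[lba]
    addr = resolve_remote(lba)
    if addr == hint:
        return content         # locally cached copy is still current
    return ("stale", addr)     # caller must read `addr` from backend storage
```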
  • Patent number: 12277338
    Abstract: Techniques for updating sparse metadata in a metadata delta log (MDL)-based storage system. The techniques include providing a 3-level MDL, which includes a first level for a first set of buckets initially (and temporarily) designated as “active”, a second level for a second set of buckets initially (and temporarily) designated as “de-staging”, and a third level for a third set of buckets designated as “base”. The techniques include receiving delta updates of space accounting (SA) statistics at buckets of the active set, in which the SA statistics are associated with IDs from a large sparse ID space. The techniques include, once the buckets of the active set are full, switching the “active” and “de-staging” designations of the first and second sets of buckets, respectively, and de-staging and merging, bucket-by-bucket, the SA delta updates contained in the de-staging set with any SA delta updates contained in the base set.
    Type: Grant
    Filed: December 6, 2023
    Date of Patent: April 15, 2025
    Assignee: Dell Products L.P.
    Inventors: Seman Shen, Vladimir Shveidel
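The designation switch between the active and de-staging bucket sets, and the merge into the base set, can be sketched as follows (a toy model with invented names; real buckets hash a large sparse ID space):

```python
class ThreeLevelMDL:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.active = {}      # level 1: receives SA delta updates
        self.destaging = {}   # level 2: being merged into base
        self.base = {}        # level 3: merged totals

    def update(self, sa_id, delta):
        self.active[sa_id] = self.active.get(sa_id, 0) + delta
        if len(self.active) >= self.capacity:
            # Switch the "active" and "de-staging" designations,
            # then merge the de-staging set into the base set.
            self.active, self.destaging = {}, self.active
            for k, v in self.destaging.items():
                self.base[k] = self.base.get(k, 0) + v
            self.destaging = {}
```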
  • Publication number: 20250103490
    Abstract: A method, computer program product, and computing system for receiving a metadata page for storage within a log-structured metadata store. An entry may be generated within a translation table page of a translation table within the log-structured metadata store, wherein each entry of a plurality of entries within the translation table page is associated with a separate metadata page. One or more metadata page deltas for the metadata page may be recorded in the log-structured metadata store. The one or more metadata page deltas for the metadata page may be flushed to a persistent memory storage. In response to flushing the one or more metadata page deltas for the metadata page, the entry within the translation table page that is associated with the metadata page may be updated without accessing any other entry within the translation table page associated with other metadata pages.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 27, 2025
    Inventors: Christopher Seibel, Vamsi K. Vankamamidi, Christopher J. Jones, Vladimir Shveidel
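The key property above, updating one translation-table entry without touching its siblings in the same page, can be sketched as (illustrative names; persistence is elided):

```python
class TranslationTable:
    def __init__(self):
        self.pages = {}     # translation table page no. -> {slot: location}
        self.deltas = {}    # metadata page id -> pending deltas

    def record_delta(self, md_page, delta):
        self.deltas.setdefault(md_page, []).append(delta)

    def flush(self, md_page, new_location, slots_per_page=512):
        # After flushing this page's deltas to persistent storage,
        # update only its own entry; sibling entries stay untouched.
        page_no, slot = divmod(md_page, slots_per_page)
        self.pages.setdefault(page_no, {})[slot] = new_location
        self.deltas.pop(md_page, None)
```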
  • Publication number: 20250086028
    Abstract: A percentage of the resource units contained in a shared resource pool of a data storage system is loaded into a global partition, and the resource units not loaded into the global partition are loaded into per-core partitions. Each per-core partition corresponds to one of multiple processor cores in the data storage system. Resource units are allocated from each one of the per-core partitions only to a work flow executing on the corresponding processor core. The number of available resource units in each one of the per-core partitions is periodically rebalanced by moving resource units between the global partition and that per-core partition.
    Type: Application
    Filed: September 12, 2023
    Publication date: March 13, 2025
    Inventors: Vladimir Shveidel, Jenny Derzhavetz
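The global/per-core split and the periodic rebalancing can be sketched as (hypothetical names and an assumed 25% global share, used only for illustration):

```python
class PartitionedPool:
    def __init__(self, total, cores, global_pct=0.25):
        g = int(total * global_pct)
        self.global_units = g                  # global partition
        self.per_core = [(total - g) // cores] * cores

    def rebalance(self, core, target):
        # Move resource units between the global partition and this
        # core's partition so the core holds `target` available units.
        diff = target - self.per_core[core]
        moved = min(diff, self.global_units) if diff > 0 else diff
        self.global_units -= moved
        self.per_core[core] += moved
```

Allocation itself (not shown) would draw only from the partition of the core running the work flow, avoiding cross-core contention between rebalance intervals.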
  • Publication number: 20250077292
    Abstract: A shared portion of processor cores is allocated within the processor cores of a data storage system. Each processor core in the shared portion of processor cores is shared between a storage system application executing in the data storage system and a containerized service also executing in the data storage system. A voluntary yield time interval is dynamically generated based on both a workload of the containerized service and a workload of the storage system application. Each processor core in the shared portion of the processor cores is periodically voluntarily yielded by the storage system application, based on the voluntary yield time interval, to allow execution of the containerized service on that processor core.
    Type: Application
    Filed: August 30, 2023
    Publication date: March 6, 2025
    Inventors: Vladimir Shveidel, Roy Koren
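One plausible shape for the dynamically generated yield interval is a function of both workloads, shrinking (yielding more often) as the containerized service's share of the load grows. The formula below is purely illustrative, not taken from the patent:

```python
def yield_interval(app_load, service_load, base_ms=10.0):
    """Interval between voluntary yields by the storage application.
    Heavier containerized-service load -> shorter interval (assumed)."""
    share = service_load / (app_load + service_load + 1e-9)
    return base_ms * (1.0 - share)
```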
  • Publication number: 20250060991
    Abstract: In at least one embodiment, techniques for resource regulation and scheduling can include: allocating a first amount of tokens denoting an amount of CPU resources available for executing background (BG) maintenance tasks; determining a distribution of the first amount of tokens among a plurality of CPU cores upon which BG maintenance tasks are allowed to execute; and scheduling, by a scheduler component, a first plurality of BG maintenance tasks for execution on the plurality of CPU cores in accordance with a plurality of averages and in accordance with the distribution, wherein each of the plurality of averages denotes an average number of tokens of CPU resources consumed to complete execution of a corresponding one of the first plurality of BG maintenance tasks of a particular BG maintenance task subtype.
    Type: Application
    Filed: August 15, 2023
    Publication date: February 20, 2025
    Applicant: Dell Products L.P.
    Inventors: Vladimir Shveidel, Maher Kachmar
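The scheduling rule above, charging each background task its subtype's average token cost against a CPU token budget, can be sketched greedily (names invented; per-core distribution is elided):

```python
def schedule(tasks, budget, averages):
    """tasks: list of (task, subtype); averages: subtype -> average token
    cost to complete one task of that subtype. Runs tasks while the
    remaining token budget covers the subtype's average cost."""
    scheduled = []
    for task, subtype in tasks:
        cost = averages[subtype]
        if budget >= cost:
            budget -= cost
            scheduled.append(task)
    return scheduled, budget
```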
  • Patent number: 12223196
    Abstract: A technique for managing metadata changes in a storage system provides different aggregation policies for use with different metadata. The technique includes assigning metadata changes to respective aggregation policies and storing the assigned metadata changes in a set of data structures. The technique further includes destaging the metadata changes from the set of data structures separately for the different aggregation policies in accordance with settings specific to those aggregation policies.
    Type: Grant
    Filed: October 25, 2023
    Date of Patent: February 11, 2025
    Assignee: Dell Products L.P.
    Inventors: Jenny Derzhavetz, Vladimir Shveidel, Nimrod Shani
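The per-policy destaging described above can be sketched with aggregation thresholds as the policy setting (a simplification; real policies may differ in more than threshold):

```python
class DeltaLog:
    def __init__(self, policies):
        # policies: name -> destage threshold (the aggregation setting)
        self.policies = policies
        self.buffers = {name: [] for name in policies}
        self.destaged = []

    def record(self, policy, change):
        # Each metadata change is assigned to an aggregation policy.
        self.buffers[policy].append(change)

    def destage(self):
        # Policies destage independently, per their own settings.
        for name, threshold in self.policies.items():
            if len(self.buffers[name]) >= threshold:
                self.destaged.extend(self.buffers[name])
                self.buffers[name] = []
```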
  • Patent number: 12222862
    Abstract: Techniques for processing a read I/O operation that reads first content stored at a target logical address can include: determining, using the target logical address as a first key to index into a first cache, whether the first cache includes a first cache entry caching first metadata used to access a first physical storage location including the first content stored at the target logical address; responsive to determining the first cache includes the first cache entry, determining, using the first metadata as a second key to index into a second cache, whether the second cache includes a second cache entry caching the first content stored at the target logical address; and responsive to determining the second cache includes the second entry, returning the first content from the second entry of the second cache in response to the read I/O operation.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: February 11, 2025
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Vamsi K. Vankamamidi
  • Patent number: 12210755
    Abstract: Nodes in a storage system can autonomously ingest I/O requests and flush data to storage. First and second nodes determine a sequence separator, the sequence separator corresponding to an entry in a page descriptor ring that separates two flushing work sets (FWS). The first node receives an input/output (I/O) request and allocates a sequence identification (ID) number to the I/O request. The first node determines a FWS for the I/O request based on the sequence separator and the sequence ID number, and commits the I/O request using the sequence ID number. The I/O request and the sequence ID number are sent to the second node.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: January 28, 2025
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Geng Han, Yousheng Liu
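The sequence-separator rule above, by which both nodes independently assign an I/O to a flushing work set, can be sketched as (illustrative; the separator is an agreed entry index in the page descriptor ring):

```python
def commit(io, seq_id, separator):
    """Both nodes agree on `separator`; entries at or after it belong to
    the newer flushing work set, earlier ones to the older set."""
    fws = 1 if seq_id >= separator else 0
    return {"io": io, "seq_id": seq_id, "fws": fws}
```

Because the decision depends only on `seq_id` and the shared separator, the second node reaches the same FWS assignment without coordination on each I/O.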
  • Patent number: 12204412
Abstract: Data replication techniques can include receiving, at a source system, a write directed to a source logical device configured for asynchronous remote replication to a destination system; performing processing that flushes a transaction log entry for the write; and performing replication processing that uses a replication queue including a replication queue entry corresponding to the write that stores the first content to a logical address. The processing can create a metadata (MD) log entry in a MD log for the write responsive to determining that the write is directed to the source logical device configured for asynchronous remote replication and that the first content has not been replicated. Responsive to the first content not being in cache, the first content can be retrieved using the reference to a storage location storing the first content. The reference can be obtained from the MD log entry or the replication queue entry.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: January 21, 2025
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Vamsi K. Vankamamidi
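The two halves of the abstract above, recording a reference at flush time and falling back to it when the content has left cache, can be sketched as (all names illustrative):

```python
def on_flush(entry, md_log):
    """On flushing a transaction-log entry for a write: if the device is
    configured for async replication and the content is not yet
    replicated, keep a reference to its storage location in the MD log."""
    if entry["async_replication"] and not entry["replicated"]:
        md_log.append({"lba": entry["lba"], "ref": entry["loc"]})

def replicate(md_entry, cache, backend):
    """Prefer cached content; otherwise follow the stored reference."""
    content = cache.get(md_entry["lba"])
    if content is None:
        content = backend[md_entry["ref"]]   # content no longer in cache
    return ("send", md_entry["lba"], content)
```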
  • Publication number: 20250021387
    Abstract: Performance metrics are continuously monitored for each processor core in a shared portion of multiple processor cores in a data storage system. Each processor core in the shared portion is shared between a storage system application located in the data storage system and a containerized service also located in the data storage system. The monitored performance metrics indicate I/O request processing latency and an amount of the processing capacity of each individual processor core in the shared portion of the processor cores that is available for use by the storage system application. Based on the performance metrics, the I/O request processing is preferentially assigned to processor cores that have relatively lower I/O request processing latency, and the background work item processing is preferentially assigned to processor cores that have relatively higher amounts of capacity available for use by the storage system application.
    Type: Application
    Filed: July 10, 2023
    Publication date: January 16, 2025
    Inventors: Vladimir Shveidel, Roy Koren
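The preferential assignment above can be sketched as a simple selection over monitored per-core metrics (field names are invented for illustration):

```python
def assign(cores):
    """cores: list of {"id", "io_latency", "available_capacity"}.
    I/O requests prefer the core with the lowest I/O latency; background
    work prefers the core with the most capacity left for the storage
    application."""
    io_core = min(cores, key=lambda c: c["io_latency"])["id"]
    bg_core = max(cores, key=lambda c: c["available_capacity"])["id"]
    return io_core, bg_core
```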
  • Publication number: 20250021229
    Abstract: In at least one embodiment, processing can include: receiving, at a first node, a read operation that reads content of a logical address, wherein a second node, but not the first node, owns the logical address; and performing optimized read processing for the read operation. The optimized read processing can include: performing, in parallel, first processing that obtains a first address hint and first content corresponding to the logical address, and second processing that obtains a second address hint corresponding to the logical address; determining whether the first and second address hints match; if the first and second address hints match, determining that first content is valid content stored at the target logical address; if the first and second address hints do not match, determining the first content is not stored at the logical address, and using the second address hint to obtain second content stored at the logical address.
    Type: Application
    Filed: July 12, 2023
    Publication date: January 16, 2025
    Applicant: Dell Products L.P.
    Inventors: Vladimir Shveidel, Nimrod Shani, Dror Zalstein
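The parallel hint comparison above can be sketched with two concurrent lookups (illustrative names; `local_lookup` returns the first hint plus content, `remote_lookup` returns the second hint from the owning node):

```python
from concurrent.futures import ThreadPoolExecutor

def optimized_read(lba, local_lookup, remote_lookup):
    """local_lookup(lba) -> (hint, content); remote_lookup(lba) -> hint.
    The two lookups run in parallel, as in the abstract."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(local_lookup, lba)
        f2 = pool.submit(remote_lookup, lba)
        hint1, content = f1.result()
        hint2 = f2.result()
    if hint1 == hint2:
        return content            # content obtained in parallel is valid
    return ("fetch", hint2)       # use the second hint to read the content
```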
  • Patent number: 12197728
    Abstract: In at least one embodiment, processing can include: receiving, at a first node, a read operation that reads content of a logical address, wherein a second node, but not the first node, owns the logical address; and performing optimized read processing for the read operation. The optimized read processing can include: performing, in parallel, first processing that obtains a first address hint and first content corresponding to the logical address, and second processing that obtains a second address hint corresponding to the logical address; determining whether the first and second address hints match; if the first and second address hints match, determining that first content is valid content stored at the target logical address; if the first and second address hints do not match, determining the first content is not stored at the logical address, and using the second address hint to obtain second content stored at the logical address.
    Type: Grant
    Filed: July 12, 2023
    Date of Patent: January 14, 2025
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Nimrod Shani, Dror Zalstein
  • Patent number: 12197757
    Abstract: Techniques for providing a virtual federation approach to increasing efficiency of processing circuitry utilization in storage nodes with a high number of cores. The techniques include, for each of two (2) physical nodes, logically partitioning a plurality of cores into a first domain of cores and a second domain of cores. The techniques include designating the first domain of cores of each physical node as belonging to a first virtual node. The techniques include designating the second domain of cores of each physical node as belonging to a second virtual node. The techniques include operating the first virtual nodes on the two (2) underlying physical nodes as a first virtual appliance, and operating the second virtual nodes on the two (2) underlying physical nodes as a second virtual appliance. In this way, scalability and speedup efficiency can be increased in a multi-core processing environment with a high number of cores.
    Type: Grant
    Filed: October 4, 2023
    Date of Patent: January 14, 2025
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Amitai Alkalay, Steven A. Morley
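The core partitioning above, two domains per physical node, with like-numbered domains across nodes forming a virtual appliance, can be sketched as (an even 50/50 split is assumed for illustration):

```python
def virtual_federation(cores_per_node, split=0.5):
    """Partition each physical node's cores into two domains; the first
    domains of both nodes form virtual appliance A, the second form B."""
    n = int(cores_per_node * split)
    domain1 = list(range(n))                   # first domain of cores
    domain2 = list(range(n, cores_per_node))   # second domain of cores
    appliance_a = {"node1": domain1[:], "node2": domain1[:]}
    appliance_b = {"node1": domain2[:], "node2": domain2[:]}
    return appliance_a, appliance_b
```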