Patents by Inventor Anton Rang

Anton Rang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240086073
    Abstract: A storage system calculates relative writability of SSDs and biases storage of data from write IOs to the SSD that has the greatest relative writability, where writability is a value calculated as a function of remaining wear-life and drive capacity. When the remaining wear-life of an SSD falls below a threshold, unstable data is evicted from that drive, where data stability is an indication of likelihood of data being changed. The drive with the greatest relative writability is selected as the target for the unstable data. The drive with the greatest relative writability is also selected as the donor for stable data that is moved to the free space created by eviction of the unstable data. Consequently, the SSD that triggers the low wear-life threshold processes fewer write IOs.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 14, 2024
    Applicant: Dell Products L.P.
    Inventor: Anton Rang
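A minimal sketch of the drive-selection logic described in publication 20240086073 above. The abstract only says writability is a function of remaining wear-life and drive capacity, so the scoring formula, the threshold value, and names such as `Ssd` and `pick_write_target` are illustrative assumptions rather than the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Ssd:
    name: str
    remaining_wear_life: float  # fraction of rated write endurance left (0.0-1.0)
    capacity_gb: float

def writability(drive: Ssd) -> float:
    # Assumed scoring function: the abstract states only that writability is
    # calculated from remaining wear-life and drive capacity, not how.
    return drive.remaining_wear_life * drive.capacity_gb

def pick_write_target(drives: list[Ssd]) -> Ssd:
    # Bias new write IOs toward the drive with the greatest relative writability.
    return max(drives, key=writability)

LOW_WEAR_THRESHOLD = 0.10  # assumed value

def rebalance(drives: list[Ssd]) -> None:
    target = pick_write_target(drives)
    for drive in drives:
        if drive is not target and drive.remaining_wear_life < LOW_WEAR_THRESHOLD:
            # Evict unstable (likely-to-change) data from the worn drive to the
            # target, and backfill the freed space with stable data donated by
            # the target, so the worn drive sees fewer future write IOs.
            print(f"move unstable data {drive.name} -> {target.name}; "
                  f"move stable data {target.name} -> {drive.name}")
```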
  • Publication number: 20230121841
    Abstract: Facilitating per-CPU reference counting for multi-core systems with a long-lived reference is provided herein. A system includes a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations include determining a first quantity of releases associated with an object in a data structure of the system and determining a second quantity of acquisitions associated with the object. The first quantity of releases can be distributed among respective first counters of processing elements of a group of processing elements. The second quantity of acquisitions can be distributed among respective second counters of the processing elements of the group of processing elements. Further, the operations can include, based on the second quantity of acquisitions and the first quantity of releases being determined to be a same value, implementing a removal of the object from the data structure.
    Type: Application
    Filed: October 19, 2021
    Publication date: April 20, 2023
    Inventor: Anton Rang
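A minimal sketch of the per-CPU reference-counting idea in publication 20230121841 above: acquisitions and releases are spread across per-processing-element counters, and the object becomes removable only when the two distributed totals match. The class and method names are illustrative, and the synchronization a real multi-core implementation would need for a race-free teardown is elided.

```python
NUM_CPUS = 4  # illustrative; a real system would use the actual CPU count

class PerCpuRefCount:
    def __init__(self):
        # One release counter and one acquisition counter per processing
        # element, so the hot path only touches the local CPU's counters.
        self.acquires = [0] * NUM_CPUS
        self.releases = [0] * NUM_CPUS

    def acquire(self, cpu: int) -> None:
        self.acquires[cpu] += 1

    def release(self, cpu: int) -> None:
        self.releases[cpu] += 1

    def can_remove(self) -> bool:
        # Slow path: the object may be removed from the data structure only
        # when total acquisitions and total releases are the same value.
        return sum(self.acquires) == sum(self.releases)
```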
  • Publication number: 20230123921
    Abstract: Facilitating the embedding of block references for reducing and/or mitigating file access latency in file systems is provided herein. A system includes a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations include populating a data structure of the system with information indicative of a block pointer that identifies a first location of a first data block of an object. The first location is a location within a storage system. The operations also can include, based on a receipt of a read request for the object, enabling access to the first data block of the object based on the block pointer. Enabling access can include bypassing a reading of a block map for access to the first data block.
    Type: Application
    Filed: October 14, 2021
    Publication date: April 20, 2023
    Inventor: Anton Rang
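A minimal sketch of the embedded-block-pointer fast path described in publication 20230123921 above, assuming a plain dictionary stands in for the block map; `ObjectRecord` and `read_first_block` are hypothetical names.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class BlockPointer:
    device: str
    offset: int  # physical location of the data block within the storage system

@dataclass
class ObjectRecord:
    name: str
    # Embedded pointer to the object's first data block, stored alongside the
    # object's own metadata so a read need not consult the block map.
    first_block: Optional[BlockPointer] = None

def read_first_block(obj: ObjectRecord,
                     block_map: Dict[str, BlockPointer]) -> BlockPointer:
    if obj.first_block is not None:
        # Fast path: the embedded block reference bypasses the block-map read.
        return obj.first_block
    # Fallback: resolve the first block's location through the block map.
    return block_map[obj.name]
```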
  • Patent number: 11204873
    Abstract: Pre-decompressing a compressed form of data that has been pre-fetched into a cache to facilitate subsequent retrieval of a decompressed form of the data from the cache is presented herein. A system retrieves, from a first portion of a cache, a compression chunk comprising compressed data blocks representing a compressed form of a group of data blocks in response to a first cache hit from the first portion of the cache being incurred, decompresses the compression chunk to obtain a decompressed chunk comprising uncompressed data blocks representing an uncompressed form of the group of data blocks, and inserts the uncompressed data blocks into a second portion of the cache. Further, the system retrieves, from the second portion of the cache, an uncompressed data block of the uncompressed data blocks in response to a second cache hit from the second portion of the cache being incurred.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: December 21, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Attilio Rao, Max Laier, Anton Rang
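A minimal sketch of the two-portion cache described in patent 11204873 above: compression chunks live in one portion, and a hit there decompresses the whole chunk into the other portion so later reads hit uncompressed blocks directly. The dictionary-based cache portions are illustrative; eviction and sizing policies are omitted.

```python
class PredecompressingCache:
    """Two cache portions: one holds compression chunks, the other holds the
    uncompressed blocks produced by decompressing those chunks."""

    def __init__(self, decompress):
        self.compressed = {}    # chunk_id -> compressed chunk bytes
        self.uncompressed = {}  # block_id -> uncompressed block bytes
        self.decompress = decompress  # chunk bytes -> {block_id: block bytes}

    def prefetch_chunk(self, chunk_id, chunk_bytes):
        # Pre-fetched data enters the cache in its compressed form.
        self.compressed[chunk_id] = chunk_bytes

    def read_block(self, block_id, chunk_id):
        # Hit in the second portion: the block is already uncompressed.
        if block_id in self.uncompressed:
            return self.uncompressed[block_id]
        # Hit in the first portion: decompress the whole chunk once and insert
        # every resulting block into the uncompressed portion.
        if chunk_id in self.compressed:
            for bid, data in self.decompress(self.compressed[chunk_id]).items():
                self.uncompressed[bid] = data
            return self.uncompressed[block_id]
        raise KeyError("miss in both portions: read from backing storage")
```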
  • Patent number: 11200004
    Abstract: Compression of data for a file system utilizing protection groups can be implemented and managed. A compression management component (CMC) can control compression of data via inline or post-process compression for storage in protection groups in memory, including determining whether to compress data, determining a compression algorithm to utilize to compress data, and/or determining whether to perform inline and/or post-process compression of data. CMC can generate protection group (PG) metadata for a PG in which compressed data is stored. PG metadata can comprise a logical extent map that describes which logical blocks contain compressed data, a list of cyclic redundancy check values for logical blocks, and a list of compression chunks that store individual metadata regarding individual compressed streams, wherein, for an individual compressed stream, the individual metadata comprises a compression format, compressed size, uncompressed size, and/or starting offset in physical space within the PG.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: December 14, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Lachlan McIlroy, Ryan Libby, Max Laier, Anton Rang
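A minimal sketch of the protection-group metadata layout enumerated in the abstract of patent 11200004 above; the field names and Python dataclasses are illustrative stand-ins for the on-disk structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CompressionChunk:
    # Individual metadata for one compressed stream, as listed in the abstract.
    compression_format: str
    compressed_size: int
    uncompressed_size: int
    physical_offset: int  # starting offset in physical space within the PG

@dataclass
class ProtectionGroupMetadata:
    # Which logical blocks contain compressed data.
    logical_extent_map: Dict[int, bool] = field(default_factory=dict)
    # One cyclic redundancy check value per logical block.
    crc_values: List[int] = field(default_factory=list)
    # Per-stream metadata for each compression chunk stored in the group.
    compression_chunks: List[CompressionChunk] = field(default_factory=list)
```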
  • Publication number: 20210141729
    Abstract: Pre-decompressing a compressed form of data that has been pre-fetched into a cache to facilitate subsequent retrieval of a decompressed form of the data from the cache is presented herein. A decompression component retrieves, from a first portion of a cache, a compression chunk comprising compressed data blocks representing a compressed form of a group of data blocks in response to a first cache hit from the first portion of the cache being incurred, decompresses the compression chunk to obtain a decompressed chunk comprising uncompressed data blocks representing an uncompressed form of the group of data blocks, and inserts the uncompressed data blocks into a second portion of the cache. Further, a read component retrieves, from the second portion of the cache, an uncompressed data block of the uncompressed data blocks in response to a second cache hit from the second portion of the cache being incurred.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: Attilio Rao, Max Laier, Anton Rang
  • Publication number: 20200249877
    Abstract: Compression of data for a file system utilizing protection groups can be implemented and managed. A compression management component (CMC) can control compression of data via inline or post-process compression for storage in protection groups in memory, including determining whether to compress data, determining a compression algorithm to utilize to compress data, and/or determining whether to perform inline and/or post-process compression of data. CMC can generate protection group (PG) metadata for a PG in which compressed data is stored. PG metadata can comprise a logical extent map that describes which logical blocks contain compressed data, a list of cyclic redundancy check values for logical blocks, and a list of compression chunks that store individual metadata regarding individual compressed streams, wherein, for an individual compressed stream, the individual metadata comprises a compression format, compressed size, uncompressed size, and/or starting offset in physical space within the PG.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 6, 2020
    Inventors: Lachlan McIlroy, Ryan Libby, Max Laier, Anton Rang
  • Patent number: 10148588
    Abstract: Implementations are provided herein for offering partitioned performance within a distributed file system and providing throttling at the granular level. A set of hardware and network resources available to process work items can be determined. A set of resource accounting tokens based on resource records generated when processing work items can be dynamically updated. A granular resource accounting aggregate for a customizable field of data can be selected for throttling, such as a unique user identifier, a unique group identifier, a unique client internet protocol address, a unique file, etc. A granular throttling level can then be established based on a granular throttling policy. In response to the resource accounting aggregate meeting the throttling level, the user, group, internet protocol address, etc. can be throttled at at least one of the cluster layer, the node layer, or the protocol layer.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: December 4, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Dan Sledz, Jonathan Walton, Daniel Powell, Anton Rang
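A minimal sketch of the granular throttling described in patent 10148588 above: resource records from processed work items are aggregated under a configurable key (user, group, client IP, file), and that key is throttled once its aggregate reaches the policy level. The `GranularThrottle` class, the record fields, and the numeric limit are illustrative assumptions.

```python
from collections import defaultdict

class GranularThrottle:
    """Aggregates resource accounting by a configurable field and reports
    when that field's aggregate has reached its throttling level."""

    def __init__(self, key_field: str, limit: float):
        self.key_field = key_field  # e.g. "user_id", "group_id", "client_ip", "file"
        self.limit = limit
        self.aggregates = defaultdict(float)

    def record(self, resource_record: dict) -> None:
        # Each processed work item yields a resource record; fold its cost into
        # the aggregate for the selected field.
        self.aggregates[resource_record[self.key_field]] += resource_record["cost"]

    def should_throttle(self, resource_record: dict) -> bool:
        return self.aggregates[resource_record[self.key_field]] >= self.limit

# Illustrative use: throttle a single user once their aggregate reaches the limit.
throttle = GranularThrottle(key_field="user_id", limit=1000.0)
throttle.record({"user_id": "u42", "cost": 250.0})
print(throttle.should_throttle({"user_id": "u42"}))  # False until the limit is hit
```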
  • Patent number: 10148531
    Abstract: Implementations are provided herein for offering partitioned performance within a distributed file system and more specifically, for offering adaptive predicted impact of resource consumption by pending work items. Core resource consumption per work item can be estimated prior to processing the work item. When processing the work item, the actual amount of resources used to process the work item can be measured and recorded. The file system can then update future estimates for performing work items based on past results. Resources made available to process future requests can be throttled based on dynamically updated estimates of resource consumption by pending work items.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: December 4, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Dan Sledz, Jonathan Walton, Daniel Powell, Anton Rang
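A minimal sketch of the adaptive estimation loop described in patent 10148531 above. The abstract says only that future estimates are updated from measured results; the exponential moving average used here is an assumed smoothing choice, not the claimed method.

```python
class AdaptiveCostEstimator:
    """Predicts the resource cost of a pending work item and refines the
    prediction from the measured outcomes of completed items."""

    def __init__(self, initial_estimate: float, alpha: float = 0.2):
        self.estimate = initial_estimate
        self.alpha = alpha  # smoothing factor (assumed)

    def predict(self) -> float:
        # Estimated cost charged against available resources before the item runs.
        return self.estimate

    def update(self, measured_cost: float) -> None:
        # Fold the measured resource consumption back into future estimates.
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * measured_cost
```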
  • Patent number: 10142195
    Abstract: Implementations are provided herein for offering partitioned performance within a distributed file system. Core resource consumption per work item can be tracked independently. Discriminative data already known by the file system surrounding the context of the work item can be used to determine a reference resource accounting specification applicable to the work item. When processing the work item, a detailed resource record can be generated that inventories the resources used in processing the work item. The resource record associated with the work item can be recorded into a set of resource accounting tokens that track activity resource consumption at a granular level. A universal table of resource accounting tokens can be dynamically updated upon the processing of work items and generation of associated resource records throughout the distributed file system.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: November 27, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Dan Sledz, Jonathan Walton, Daniel Powell, Anton Rang
  • Patent number: 10033620
    Abstract: Implementations are provided herein for offering partitioned performance within a distributed file system; specifically, for providing adaptive policies and leases within the partitions. An amount of resources available to a cluster of nodes operating as a distributed file system can be determined and those resources can be apportioned to individual nodes based on hardware profiles of the nodes. A set of resource accounting tokens can be dynamically updated and used as a basis to generate a cluster resource accounting aggregate, a set of node resource accounting aggregates, and a set of protocol resource accounting aggregates. The dynamically updated resource accounting aggregates can then be used to dynamically throttle resources available to process work requests at the cluster, node, and protocol head layers based on policy.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: July 24, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Dan Sledz, Jonathan Walton, Daniel Powell, Anton Rang
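A minimal sketch of the layered throttling described in patent 10033620 above: a cluster-wide budget is apportioned to nodes in proportion to a hardware weight, and a work request must fit within the cluster, node, and protocol-head limits. The proportional weighting and the admission check are illustrative assumptions.

```python
def apportion_to_nodes(cluster_budget: float, hardware_weights: dict) -> dict:
    """Splits the cluster-wide resource budget across nodes in proportion to a
    per-node hardware weight (the weighting scheme is assumed)."""
    total = sum(hardware_weights.values())
    return {node: cluster_budget * weight / total
            for node, weight in hardware_weights.items()}

def admit(cost: float, used: dict, limits: dict) -> bool:
    """A work request proceeds only if it fits within the policy limits at the
    cluster, node, and protocol-head layers."""
    return all(used[layer] + cost <= limits[layer]
               for layer in ("cluster", "node", "protocol"))

# Illustrative use: two nodes with unequal hardware profiles.
print(apportion_to_nodes(100.0, {"node-1": 3, "node-2": 1}))  # {'node-1': 75.0, 'node-2': 25.0}
```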
  • Publication number: 20070245410
    Abstract: One embodiment of the present invention provides a system that facilitates securely forgetting a secret. During operation, the system obtains a set of secrets which are encrypted with a secret key S_i, wherein the set of secrets includes a secret to be forgotten and other secrets which are to be remembered. Next, the system decrypts the secrets to be remembered using S_i, and also removes the secret to be forgotten from the set of secrets. The system then obtains a new secret key S_{i+1}, and encrypts the secrets to be remembered using S_{i+1}. Finally, the system forgets S_i.
    Type: Application
    Filed: April 17, 2006
    Publication date: October 18, 2007
    Inventors: Radia Perlman, Anton Rang
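A minimal sketch of the re-encryption step described in publication 20070245410 above, using the third-party `cryptography` package's Fernet cipher purely as a stand-in (the publication does not name a cipher): the secrets to be remembered are decrypted with S_i, re-encrypted under a fresh S_{i+1}, and the old key is then discarded so the omitted secret becomes unrecoverable.

```python
from cryptography.fernet import Fernet  # third-party package, used as a stand-in cipher

def forget(encrypted_secrets: dict, old_key: bytes, name_to_forget: str):
    """Re-encrypts every secret except `name_to_forget` under a fresh key and
    returns the new ciphertexts plus the new key; the caller then discards
    `old_key`, which renders the omitted secret unrecoverable."""
    s_i = Fernet(old_key)
    # Decrypt only the secrets that are to be remembered.
    remembered = {name: s_i.decrypt(ct)
                  for name, ct in encrypted_secrets.items()
                  if name != name_to_forget}
    # Encrypt them under the new secret key S_{i+1}.
    new_key = Fernet.generate_key()
    s_next = Fernet(new_key)
    return {name: s_next.encrypt(pt) for name, pt in remembered.items()}, new_key
```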
  • Patent number: 7017012
    Abstract: In a computer system, a distributed storage system having a data coherency unit for maintaining data coherency across a number of storage devices sharing such data is described. The data coherency unit includes logic to monitor data transition states in each of the data storage devices to detect when the processing status of data being shared by two or more of the storage devices changes. The data coherency unit advantageously ensures that a status change in shared data in one storage device is broadcast to other storage devices having copies of the data without having each storage device independently monitor adjoining storage devices to detect data state changes.
    Type: Grant
    Filed: June 20, 2002
    Date of Patent: March 21, 2006
    Assignee: Sun Microsystems, Inc.
    Inventors: Kevin J. Clarke, Steve McPolin, Robert Gittins, Anton Rang
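A minimal sketch of the broadcast behaviour described in patent 7017012 above: the coherency unit tracks which devices hold copies of a data item and pushes state transitions to them, so no device has to monitor its neighbours. The callback-registry design is an illustrative assumption.

```python
class DataCoherencyUnit:
    """Tracks which storage devices hold copies of each data item and
    broadcasts processing-status changes to them."""

    def __init__(self):
        self.sharers = {}  # data_id -> {device_id: notify_callback}

    def register_copy(self, data_id, device_id, notify):
        self.sharers.setdefault(data_id, {})[device_id] = notify

    def state_changed(self, data_id, new_state, source_device_id):
        # Push the transition to every other device sharing the data, so no
        # device needs to independently poll adjoining devices for changes.
        for device_id, notify in self.sharers.get(data_id, {}).items():
            if device_id != source_device_id:
                notify(data_id, new_state)
```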
  • Publication number: 20030236950
    Abstract: In a computer system, a distributed storage system having a data coherency unit for maintaining data coherency across a number of storage devices sharing such data is described. The data coherency unit includes logic to monitor data transition states in each of the data storage devices to detect when the processing status of data being shared by two or more of the storage devices changes. The data coherency unit advantageously ensures that a status change in shared data in one storage device is broadcast to other storage devices having copies of the data without having each storage device independently monitor adjoining storage devices to detect data state changes.
    Type: Application
    Filed: June 20, 2002
    Publication date: December 25, 2003
    Inventors: Kevin J. Clarke, Steve McPolin, Robert Gittins, Anton Rang