Patents by Inventor Michael Scharland

Michael Scharland has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972112
    Abstract: A host IO device directly implements host read operations on both local memory and on peer memory via a PCIe non-transparent bridge (NTB). When a host read operation is received by the host IO device from a host, the host IO device uses an API to obtain the physical address of the requested data in the peer memory, and generates a PCIe Transaction Layer Packet (TLP) addressed to that address in the peer memory. The TLP addressed to an address in the peer memory is passed over the NTB to the peer compute node to retrieve the data stored in the addressed slot of peer memory. The requested data is returned to the host IO device over the NTB, stored in a buffer, and read out to the host to directly respond to the host read operation.
    Type: Grant
    Filed: January 27, 2023
    Date of Patent: April 30, 2024
    Assignee: Dell Products L.P.
    Inventors: Jonathan Krasner, Ro Monserrat, Michael Scharland, Jerome Cartmell, James M Guyer, Scott Rowlands, Julie Zhivich, Thomas Mackintosh
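A minimal, runnable Python sketch of the read flow this abstract describes. The class and attribute names (PeerMemory, NonTransparentBridge, HostIODevice, peer_directory) are hypothetical stand-ins for illustration, not the patented implementation:

```python
# Illustrative sketch only; names are hypothetical stand-ins.

class PeerMemory:
    """Simulates the peer compute node's memory reachable through the NTB."""
    def __init__(self):
        self.slots = {}                            # physical address -> data

    def read(self, phys_addr):
        return self.slots[phys_addr]

class NonTransparentBridge:
    """Forwards memory-read TLPs addressed to peer memory and returns the data."""
    def __init__(self, peer):
        self.peer = peer

    def forward_read_tlp(self, tlp):
        return self.peer.read(tlp["address"])      # data comes back across the NTB

class HostIODevice:
    def __init__(self, local_memory, ntb, peer_directory):
        self.local_memory = local_memory           # data already held locally
        self.ntb = ntb
        self.peer_directory = peer_directory       # stand-in for the API: track -> peer physical address

    def host_read(self, track):
        if track in self.local_memory:             # served directly from local memory
            return self.local_memory[track]
        phys_addr = self.peer_directory[track]     # API lookup of the peer-side address
        tlp = {"type": "MemRd", "address": phys_addr}  # TLP addressed to peer memory
        data = self.ntb.forward_read_tlp(tlp)      # crosses the NTB to the peer node
        staging = bytearray(data)                  # stored in a buffer on the IO device
        return bytes(staging)                      # read out to directly answer the host

peer = PeerMemory()
peer.slots[0x1000] = b"track-42 data"
dev = HostIODevice({}, NonTransparentBridge(peer), {"track-42": 0x1000})
print(dev.host_read("track-42"))                   # b'track-42 data'
```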
  • Publication number: 20240126437
    Abstract: A storage system is configured to accept subsequent versions of write data on a given track to multiple respective slots of shared global memory. A track index table presents metadata at the track level and can hold up to N slots of data. All slots of shared global memory holding data owed to the source volume and to snapshots of the source volume are bound to the track in the track index table. Each time a write occurs on a track, the track index table is used to determine whether a write-pending slot for the track is owed to a snapshot copy of the source volume. When a write-pending slot contains data that is owed to a snapshot copy of the source volume, a new slot is allocated to the write IO and bound to the track in the track index table.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: Sandeep Chandrashekhara, Mark Halstead, Michael Ferrari, Rong Yu, Michael Scharland
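A toy sketch of the slot-binding rule described above, assuming a hypothetical TrackIndexTable class and an arbitrary slot limit N; it only illustrates the allocate-a-new-slot-when-owed-to-a-snapshot behavior, not the actual storage system code:

```python
# Illustrative sketch only; class names and the value of N are assumptions.

N = 4   # assumed maximum number of slots that can be bound to one track

class Slot:
    def __init__(self, data, owed_to_snapshot=False):
        self.data = data
        self.owed_to_snapshot = owed_to_snapshot    # data still needed by a snapshot copy

class TrackIndexTable:
    def __init__(self):
        self.tracks = {}        # track id -> list of bound slots (newest last)

    def write(self, track, data):
        slots = self.tracks.setdefault(track, [])
        if slots and not slots[-1].owed_to_snapshot:
            slots[-1].data = data                   # safe to overwrite in place
            return slots[-1]
        if len(slots) >= N:
            raise RuntimeError("track index table full; destage or release a slot first")
        new_slot = Slot(data)                       # preserve snapshot data by binding a new slot
        slots.append(new_slot)
        return new_slot

    def snapshot(self, track):
        # Mark the current write-pending slot as owed to the new snapshot copy.
        if self.tracks.get(track):
            self.tracks[track][-1].owed_to_snapshot = True

table = TrackIndexTable()
table.write("track-7", b"v1")
table.snapshot("track-7")        # v1 is now owed to a snapshot
table.write("track-7", b"v2")    # lands in a newly allocated slot; v1 is preserved
print([s.data for s in table.tracks["track-7"]])    # [b'v1', b'v2']
```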
  • Patent number: 11954079
    Abstract: The metadata containing the count and key fields of CKD records is reversibly decoupled from the user data of the data field so that the data can be deduplicated. Multiple CKD records may be coalesced into a larger CKD track. The coalesced metadata is compressed and stored in a CKD hash table. The user data is hashed, and the hash is used as a hash key that is associated with the compressed metadata in the CKD hash table. When the hash of user data associated with a CKD write IO matches the hash key of an existing entry in the table, data duplication is indicated. The compressed metadata is added to the entry, and the user data is deduplicated by creating storage system metadata that points to the pre-existing copy of the user data. The storage system metadata includes unique information that enables the corresponding compressed metadata to be subsequently located in the hash table to reassemble the CKD records.
    Type: Grant
    Filed: June 15, 2022
    Date of Patent: April 9, 2024
    Assignee: Dell Products L.P.
    Inventors: Ramesh Doddaiah, Richard Goodwill, Jeremy O'Hare, Michael Scharland, Mohammed Asher
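The decoupling and deduplication flow can be illustrated with a small, hedged Python sketch; the table names, zlib compression, and SHA-256 hashing are assumptions chosen for illustration, not the algorithms the patent specifies:

```python
# Illustrative sketch only; hashing/compression choices are assumptions.

import hashlib
import zlib

ckd_hash_table = {}     # hash of user data -> list of (record id, compressed metadata)
data_store = {}         # hash of user data -> single stored copy of the user data
storage_metadata = []   # per-record pointers back into the hash table / data store

def write_ckd_record(count_key_metadata: bytes, user_data: bytes, record_id: str):
    h = hashlib.sha256(user_data).hexdigest()           # hash key for the user data
    compressed_md = zlib.compress(count_key_metadata)   # coalesced metadata, compressed

    if h in ckd_hash_table:
        # Duplicate user data: add the metadata to the existing entry and point the
        # storage-system metadata at the pre-existing copy.
        ckd_hash_table[h].append((record_id, compressed_md))
    else:
        ckd_hash_table[h] = [(record_id, compressed_md)]
        data_store[h] = user_data                        # first (and only) stored copy
    storage_metadata.append({"record": record_id, "hash": h})

def read_ckd_record(record_id: str):
    entry = next(m for m in storage_metadata if m["record"] == record_id)
    compressed_md = dict(ckd_hash_table[entry["hash"]])[record_id]
    # Reassemble the CKD record: decompressed count/key fields plus the shared data field.
    return zlib.decompress(compressed_md), data_store[entry["hash"]]

write_ckd_record(b"CCHH=0001,R=1,KL=4", b"payload", "rec-1")
write_ckd_record(b"CCHH=0002,R=1,KL=4", b"payload", "rec-2")    # deduplicated
print(len(data_store), read_ckd_record("rec-2")[0])             # 1 b'CCHH=0002,R=1,KL=4'
```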
  • Publication number: 20230409544
    Abstract: The metadata containing the count and key fields of CKD records is reversibly decoupled from the user data of the data field so that the data can be deduplicated. Multiple CKD records may be coalesced into a larger CKD track. The coalesced metadata is compressed and stored in a CKD hash table. The user data is hashed, and the hash is used as a hash key that is associated with the compressed metadata in the CKD hash table. When the hash of user data associated with a CKD write IO matches the hash key of an existing entry in the table, data duplication is indicated. The compressed metadata is added to the entry, and the user data is deduplicated by creating storage system metadata that points to the pre-existing copy of the user data. The storage system metadata includes unique information that enables the corresponding compressed metadata to be subsequently located in the hash table to reassemble the CKD records.
    Type: Application
    Filed: June 15, 2022
    Publication date: December 21, 2023
    Inventors: Ramesh Doddaiah, Richard Goodwill, Jeremy O'Hare, Michael Scharland, Mohammed Asher
  • Patent number: 11762556
    Abstract: A method, computer program product, and computer system for receiving, by a computing device, an I/O request. It may be identified whether the I/O request is eligible for handling via a first path without also requiring handling via a second path. If the I/O request is eligible, the I/O request may be processed via the first path on a host I/O stack without processing the I/O request via the second path on a storage array I/O stack. If the I/O request is ineligible, the I/O request may be processed via the first path on the host I/O stack and via the second path on the storage array I/O stack.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: September 19, 2023
    Assignee: EMC IP Holding Company, LLC
    Inventors: Adnan Sahin, Michael Scharland, Robert DeCrescenzo, Steven T. McClure, James Marriott Guyer, Jason J. Duquette
  • Patent number: 11755216
    Abstract: Aspects of the present disclosure relate to data cache management. In embodiments, a logical block address (LBA) bucket is established with at least one LBA group. Additionally, at least one LBA group is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by a storage array. Further, the association includes binding the two or more distinctly sized cache slots with at least one LBA group and mapping the bound distinctly sized cache slots in a searchable data structure. Furthermore, the searchable data structure identifies relationships between slot pointers and key metadata.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: September 12, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Michael Scharland, Mark Halstead, Rong Yu, Peng Wu, Benjamin Yoder
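A rough illustration of binding an LBA group to distinctly sized cache slots and keeping the bindings in a searchable structure relating slot pointers to key metadata; the slot sizes, 512-byte LBAs, and class names are assumptions for the sketch only:

```python
# Illustrative sketch only; sizes and names are assumptions.

SLOT_SIZES = (16 * 1024, 64 * 1024, 128 * 1024)   # assumed distinct cache slot sizes
LBA_BYTES = 512                                    # assumed logical block size

class LBABucket:
    def __init__(self, lba_group):
        self.lba_group = lba_group
        self.bindings = []      # searchable structure: (start LBA, slot pointer, key metadata)

    def bind_slot(self, start_lba, io_size, slot_pointer):
        # Pick the smallest distinct slot size that covers the observed IO size.
        slot_size = next(s for s in SLOT_SIZES if s >= io_size)
        metadata = {"slot_size": slot_size, "lba_group": self.lba_group}
        self.bindings.append((start_lba, slot_pointer, metadata))
        self.bindings.sort(key=lambda b: b[0])      # keep the structure searchable by LBA

    def lookup(self, lba):
        # Relate an LBA back to the bound slot pointer and its key metadata.
        for start, pointer, md in self.bindings:
            if start <= lba < start + md["slot_size"] // LBA_BYTES:
                return pointer, md
        return None

bucket = LBABucket(lba_group=3)
bucket.bind_slot(start_lba=0, io_size=8 * 1024, slot_pointer="slot-A")       # bound to a 16 KB slot
bucket.bind_slot(start_lba=1024, io_size=100 * 1024, slot_pointer="slot-B")  # bound to a 128 KB slot
print(bucket.lookup(16))    # ('slot-A', {'slot_size': 16384, 'lba_group': 3})
```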
  • Patent number: 11687443
    Abstract: The present disclosure relates to one or more memory management techniques. In embodiments, one or more regions of storage class memory (SCM) of a storage array are provisioned as expanded global memory. The one or more regions can correspond to SCM persistent cache memory regions. The storage array's global memory and expanded global memory can be used to execute one or more storage-related services connected to servicing (e.g., executing) an input/output (IO) operation.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: June 27, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Owen Martin, Michael Scharland, Earl Medeiros, Parmeshwr Prasad
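A hedged sketch of the expanded-global-memory idea: spill from primary global memory into provisioned SCM regions and serve IO from either tier. The spill-when-full placement policy and the names are illustrative assumptions, not the patented mechanism:

```python
# Illustrative sketch only; placement policy and names are assumptions.

class GlobalMemory:
    def __init__(self, dram_capacity):
        self.dram = {}                          # primary global memory (DRAM)
        self.dram_capacity = dram_capacity      # entries, for simplicity
        self.scm = {}                           # SCM regions provisioned as expanded global memory

    def put(self, key, value):
        # Prefer primary global memory; spill into the expanded SCM region when full.
        if len(self.dram) < self.dram_capacity:
            self.dram[key] = value
            return "dram"
        self.scm[key] = value
        return "scm"

    def get(self, key):
        # IO servicing can be satisfied from either tier of global memory.
        return self.dram.get(key, self.scm.get(key))

gm = GlobalMemory(dram_capacity=2)
for track in ("t1", "t2", "t3"):
    print(track, "->", gm.put(track, b"data"))   # t3 spills into the expanded SCM region
print(gm.get("t3"))                              # b'data'
```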
  • Patent number: 11599461
    Abstract: Aspects of the present disclosure relate to data cache management. In embodiments, a storage array's memory is provisioned with cache memory, wherein the cache memory includes one or more sets of distinctly sized cache slots. Additionally, a logical storage volume (LSV) is established with at least one logical block address (LBA) group. Further, at least one of the LSV's LBA groups is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by the storage array.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 7, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Michael Scharland, Mark Halstead, Rong Yu, Peng Wu, Benjamin Yoder, Kaustubh Sahasrabudhe
  • Publication number: 20230023314
    Abstract: Aspects of the present disclosure relate to data cache management. In embodiments, a storage array's memory is provisioned with cache memory, wherein the cache memory includes one or more sets of distinctly sized cache slots. Additionally, a logical storage volume (LSV) is established with at least one logical block address (LBA) group. Further, at least one of the LSV's LBA groups is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by the storage array.
    Type: Application
    Filed: July 26, 2021
    Publication date: January 26, 2023
    Applicant: EMC IP Holding Company LLC
    Inventors: Michael Scharland, Mark Halstead, Rong Yu, Peng Wu, Benjamin Yoder, Kaustubh Sahasrabudhe
  • Publication number: 20230021424
    Abstract: Aspects of the present disclosure relate to data cache management. In embodiments, a logical block address (LBA) bucket is established with at least one LBA group. Additionally, at least one LBA group is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by a storage array. Further, the association includes binding the two or more distinctly sized cache slots with at least one LBA group and mapping the bound distinctly sized cache slots in a searchable data structure. Furthermore, the searchable data structure identifies relationships between slot pointers and key metadata.
    Type: Application
    Filed: January 28, 2022
    Publication date: January 26, 2023
    Applicant: Dell Products L.P.
    Inventors: Michael Scharland, Mark Halstead, Rong Yu, Peng Wu, Benjamin Yoder
  • Patent number: 11556473
    Abstract: Embodiments of the present disclosure relate to cache memory management. One or more global caches are dynamically partitioned and sized into one or more cache partitions based on anticipated input/output (IO) workloads.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: January 17, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Owen Martin, Vladimir Desyatov, Michael Scharland
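A minimal sketch of workload-driven cache partitioning; the proportional-to-anticipated-IOPS sizing rule is an assumed placeholder, since the abstract claims dynamic partitioning generally rather than a specific formula:

```python
# Illustrative sketch only; the sizing formula is an assumption.

def partition_cache(total_cache_bytes, anticipated_workloads):
    """anticipated_workloads: mapping of partition name -> anticipated IOPS share."""
    total_iops = sum(anticipated_workloads.values())
    return {
        name: int(total_cache_bytes * iops / total_iops)
        for name, iops in anticipated_workloads.items()
    }

# Re-running the function as forecasts change effectively re-partitions and re-sizes the cache.
print(partition_cache(256 * 2**30, {"oltp": 60000, "backup": 10000, "analytics": 30000}))
```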
  • Publication number: 20220237112
    Abstract: The present disclosure relates to one or more memory management techniques. In embodiments, one or more regions of storage class memory (SCM) of a storage array are provisioned as expanded global memory. The one or more regions can correspond to SCM persistent cache memory regions. The storage array's global memory and expanded global memory can be used to execute one or more storage-related services connected to servicing (e.g., executing) an input/output (IO) operation.
    Type: Application
    Filed: January 27, 2021
    Publication date: July 28, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: Owen Martin, Michael Scharland, Earl Medeiros, Parmeshwr Prasad
  • Publication number: 20220035743
    Abstract: Embodiments of the present disclosure relate to cache memory management. One or more global caches are dynamically partitioned and sized into one or more cache partitions based on anticipated input/output (IO) workloads.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: Owen Martin, Vladimir Desyatov, Michael Scharland
  • Publication number: 20210382629
    Abstract: A method, computer program product, and computer system for receiving, by a computing device, an I/O request. It may be identified whether the I/O request is eligible for handling via a first path without also requiring handling via a second path. If the I/O request is eligible, the I/O request may be processed via the first path on a host I/O stack without processing the I/O request via the second path on a storage array I/O stack.
    Type: Application
    Filed: August 25, 2021
    Publication date: December 9, 2021
    Inventors: Adnan Sahin, Michael Scharland, Robert DeCrescenzo, Steven T. McClure, James Marriott Guyer, Jason J. Duquette
  • Patent number: 11144445
    Abstract: Within a storage array, allocation of physical storage capacity may be managed in standard-size allocation units of uncompressed data, e.g., 128 KB tracks, while smaller sub-allocation-unit compression domains, e.g., 32 KB quarter tracks, are used for compressed data. The data within a sub-allocation unit may be compressed to a size that is less than the capacity of the sub-allocation unit. Data associated with sub-allocation units that are not required to service a read or write may not need to be compressed or decompressed in order to service the read or write. Consequently, resource usage may be more efficient.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: October 12, 2021
    Assignee: Dell Products L.P.
    Inventors: Rong Yu, Michael Scharland, Jeremy O'Hare
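The sub-allocation-unit idea can be sketched as compressing each 32 KB quarter track independently, so a read touching one quarter decompresses only that quarter; zlib and the helper names are illustrative assumptions:

```python
# Illustrative sketch only; zlib and helper names are assumptions.

import zlib

TRACK_SIZE = 128 * 1024          # standard allocation unit (uncompressed track)
QUARTER = 32 * 1024              # sub-allocation unit / compression domain

def compress_track(track_bytes):
    assert len(track_bytes) == TRACK_SIZE
    # Each quarter track is compressed independently into its own domain.
    return [zlib.compress(track_bytes[i:i + QUARTER]) for i in range(0, TRACK_SIZE, QUARTER)]

def read_range(compressed_quarters, offset, length):
    # Only the quarter tracks overlapping the requested range are decompressed.
    first, last = offset // QUARTER, (offset + length - 1) // QUARTER
    out = b"".join(zlib.decompress(compressed_quarters[q]) for q in range(first, last + 1))
    start = offset - first * QUARTER
    return out[start:start + length]

track = bytes(range(256)) * (TRACK_SIZE // 256)
quarters = compress_track(track)
print(read_range(quarters, offset=40 * 1024, length=100) == track[40 * 1024:40 * 1024 + 100])  # True
```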
  • Patent number: 11144396
    Abstract: Disks of equal storage capacity in a disk cluster have M*W partitions, where RAID width W=D+P. RAID (D+P) protection groups are implemented on the disks with protection group members stored in individual partitions. An amount of storage capacity equal to the storage capacity of one disk is distributed across multiple disks in spare partitions. The spare partitions may be distributed such that no more than one spare partition resides on a single disk. M may be selected to optimize rebuild latency. The protection group members of a failed disk are rebuilt in the distributed spare partitions and subsequently relocated to a provisional spare drive. The populated provisional spare drive replaces the failed drive.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: October 12, 2021
    Assignee: Dell Products L.P.
    Inventors: Kuolin Hua, Kunxiu Gao, Michael Scharland
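A simplified sketch of the partition and spare layout arithmetic (spares are all placed at the last partition index purely for readability, and the rebuild mapping is illustrative, not the patented algorithm):

```python
# Illustrative, simplified sketch only; layout choices are assumptions.

D, P = 4, 1                        # RAID D+P members per protection group
W = D + P                          # RAID width
M = 2                              # multiplier chosen to tune rebuild latency
PARTITIONS_PER_DISK = M * W        # 10 partitions per equal-capacity disk
NUM_DISKS = 11                     # disks in the cluster
assert NUM_DISKS >= PARTITIONS_PER_DISK   # enough disks to spread the spares, one per disk

# One disk's worth of spare capacity, spread so no disk holds more than one spare partition.
spare_partitions = [(disk, PARTITIONS_PER_DISK - 1) for disk in range(PARTITIONS_PER_DISK)]

def rebuild_plan(failed_disk):
    """Map each protection-group member on the failed disk to a distributed spare."""
    # The failed disk's own spare partition holds no data, so it needs no rebuild.
    members = [(failed_disk, part) for part in range(PARTITIONS_PER_DISK - 1)]
    targets = [s for s in spare_partitions if s[0] != failed_disk]
    return list(zip(members, targets))

for member, spare in rebuild_plan(failed_disk=3):
    print(f"rebuild member at {member} into spare partition {spare}")
# After the rebuild, the recovered members would be relocated to a provisional spare
# drive, which then replaces the failed drive.
```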
  • Patent number: 11106360
    Abstract: A method, computer program product, and computer system for receiving, by a computing device, an I/O request. It may be identified whether the I/O request is eligible for handling via a first path without also requiring handling via a second path. If the I/O request is eligible, the I/O request may be processed via the first path on a host I/O stack without processing the I/O request via the second path on a storage array I/O stack. If the I/O request is ineligible, the I/O request may be processed via the first path on the host I/O stack and via the second path on the storage array I/O stack.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: August 31, 2021
    Assignee: EMC IP Holding Company, LLC
    Inventors: Adnan Sahin, Michael Scharland, Robert DeCrescenzo, Steven T. McClure, James Marriott Guyer, Jason J. Duquette
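The eligibility-based routing in this abstract can be sketched as a single dispatch function; the eligibility predicate and the stand-in stacks below are assumptions for illustration, not the claimed implementation:

```python
# Illustrative sketch only; the predicate and stacks are stand-ins.

def handle_io(request, host_io_stack, storage_array_io_stack, is_eligible):
    """Route an I/O request based on an eligibility test.

    is_eligible(request) returns True when the request can be handled entirely on the
    host I/O stack (first path) without also traversing the storage array I/O stack
    (second path).
    """
    if is_eligible(request):
        return host_io_stack(request)                      # first path only
    # Ineligible requests are processed via both paths.
    host_result = host_io_stack(request)
    return storage_array_io_stack(request, host_result)

# Example with trivial stand-in stacks: small reads are assumed eligible.
result = handle_io(
    {"op": "read", "size": 4096},
    host_io_stack=lambda req: f"host-path:{req['op']}",
    storage_array_io_stack=lambda req, prev: f"{prev}+array-path",
    is_eligible=lambda req: req["op"] == "read" and req["size"] <= 8192,
)
print(result)   # host-path:read
```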
  • Patent number: 10496282
    Abstract: Storage group performance targets are achieved by managing resources using discrete techniques that are selected based on learned cost-benefit rank. The techniques include delaying the start of IOs based on storage group association, making a storage group active or passive on a port, and biasing front-end cores. A performance goal may be assigned to each storage group based on the volume of IOs and the difference between an observed response time and a target response time. A decision tree is used to select a correction technique, with the selection biased by the cost of deployment. The decision tree maintains an average benefit for each technique over time, with rankings based on maximizing cost-benefit.
    Type: Grant
    Filed: March 16, 2016
    Date of Patent: December 3, 2019
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Owen Martin, Arieh Don, Michael Scharland
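A very rough sketch of the selection step only: rank the three named techniques by running average benefit relative to deployment cost. The scoring arithmetic, update rule, and starting values are assumed placeholders rather than the patented decision tree:

```python
# Illustrative sketch only; scores and update rule are assumptions.

TECHNIQUES = ["delay_io_start", "toggle_port_active_passive", "bias_front_end_cores"]

# Running averages of observed benefit (ms of response-time improvement) and relative
# deployment cost per technique; the averages are updated as corrections are applied.
avg_benefit = {"delay_io_start": 1.2, "toggle_port_active_passive": 2.5, "bias_front_end_cores": 0.8}
deploy_cost = {"delay_io_start": 1.0, "toggle_port_active_passive": 3.0, "bias_front_end_cores": 0.5}

def performance_goal(io_volume, observed_rt_ms, target_rt_ms):
    # Goal grows with both IO volume and how far response time misses its target.
    return io_volume * max(0.0, observed_rt_ms - target_rt_ms)

def select_correction():
    # Rank by cost-adjusted benefit and pick the highest-ranked technique.
    return max(TECHNIQUES, key=lambda t: avg_benefit[t] / deploy_cost[t])

def record_outcome(technique, observed_benefit, alpha=0.2):
    # Update the running average so rankings track real cost-benefit over time.
    avg_benefit[technique] = (1 - alpha) * avg_benefit[technique] + alpha * observed_benefit

if performance_goal(io_volume=50000, observed_rt_ms=4.0, target_rt_ms=2.5) > 0:
    chosen = select_correction()
    print("apply:", chosen)                 # apply: bias_front_end_cores
    record_outcome(chosen, observed_benefit=1.1)
```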
  • Patent number: 10019359
    Abstract: Described are techniques for processing I/O operations. A read operation is received to read first data from a first location. It is determined whether the read operation is a read miss and whether non-location metadata for the first location is stored in cache. Responsive to determining that the read operation is a read miss and that the non-location metadata for the first location is not stored in cache, first processing is performed that includes issuing concurrently a first read request to read the first data from physical storage and a second read request to read the non-location metadata for the first location from physical storage.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: July 10, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Andrew Chanler, Michael Scharland, Gabriel BenHanokh, Arieh Don
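The concurrency point of this abstract, sketched with a thread pool: on a read miss where the non-location metadata is also uncached, both backend reads are issued at the same time instead of serially. The cache layout and function names are illustrative assumptions:

```python
# Illustrative sketch only; cache layout and names are assumptions.

from concurrent.futures import ThreadPoolExecutor

def read_from_physical_storage(kind, location):
    # Stand-in for a backend read (user data or non-location metadata).
    return f"{kind}@{location}"

def handle_read(location, data_cache, metadata_cache):
    if location in data_cache:                      # read hit: no backend IO needed
        return data_cache[location], metadata_cache.get(location)

    if location in metadata_cache:                  # read miss, but metadata is cached
        return read_from_physical_storage("data", location), metadata_cache[location]

    # Read miss AND non-location metadata not in cache: issue both reads concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        data_future = pool.submit(read_from_physical_storage, "data", location)
        md_future = pool.submit(read_from_physical_storage, "non-location-metadata", location)
        return data_future.result(), md_future.result()

print(handle_read("lba-100", data_cache={}, metadata_cache={}))
```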
  • Patent number: 9678869
    Abstract: Described are techniques for processing I/O operations. A read operation is received to read first data from a first location. It is determined whether the read operation is a read miss and whether non-location metadata for the first location is stored in cache. Responsive to determining that the read operation is a read miss and that the non-location metadata for the first location is not stored in cache, first processing is performed that includes issuing concurrently a first read request to read the first data from physical storage and a second read request to read the non-location metadata for the first location from physical storage.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: June 13, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Andrew Chanler, Michael Scharland, Gabriel BenHanokh, Arieh Don