Patents by Inventor Kaustubh Sahasrabudhe

Kaustubh Sahasrabudhe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11809315
    Abstract: Worker threads allocate at least some recycled cache slots of a local portion of a shared memory to the compute node to which the memory portion is local. More specifically, the recycled cache slots are allocated prior to receipt of the IO that the recycled cache slot will be used to service. The allocated recycled cache slots are added to primary queues of each compute node. If a primary queue is full then the worker thread adds the recycled cache slot, unallocated, to a secondary queue. Cache slots in the secondary queue can be claimed by any compute node associated with the shared memory. Cache slots in the primary queue can be used by the local compute node without sending test and set messages via the fabric that interconnects the compute nodes, thereby improving IO latency.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: November 7, 2023
    Assignee: Dell Products L.P.
    Inventors: Steve Ivester, Kaustubh Sahasrabudhe
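
The abstract above (patent 11809315, and its published application 20220300420 further down this listing) describes a two-level queueing scheme for recycled cache slots. Below is a minimal Python sketch of that scheme; every name in it (CacheSlot, ComputeNode, recycle_slot, get_slot_for_io, the queue capacity) is an illustrative assumption rather than the patented implementation, which runs inside the storage array's operating environment and involves fabric messaging not modeled here.

```python
from __future__ import annotations
from collections import deque
from dataclasses import dataclass, field

@dataclass
class CacheSlot:
    slot_id: int
    owner: int | None = None  # compute node the slot is pre-allocated to, if any

@dataclass
class ComputeNode:
    node_id: int
    primary_capacity: int
    primary: deque = field(default_factory=deque)  # pre-allocated, local-only slots

class SharedMemory:
    """Hypothetical stand-in for the shared cache described in the abstract."""
    def __init__(self):
        self.secondary: deque[CacheSlot] = deque()  # unallocated slots, claimable by any node

def recycle_slot(node: ComputeNode, shared: SharedMemory, slot: CacheSlot) -> None:
    """Worker-thread path: pre-allocate a recycled slot to the local node if its
    primary queue has room; otherwise leave it unallocated on the secondary queue."""
    if len(node.primary) < node.primary_capacity:
        slot.owner = node.node_id          # allocation happens before any IO arrives
        node.primary.append(slot)
    else:
        slot.owner = None
        shared.secondary.append(slot)

def get_slot_for_io(node: ComputeNode, shared: SharedMemory) -> CacheSlot | None:
    """IO path: a primary-queue slot needs no fabric test-and-set message;
    the secondary queue stands in for the slower shared path."""
    if node.primary:
        return node.primary.popleft()      # fast path: already owned locally
    if shared.secondary:
        slot = shared.secondary.popleft()  # slow path: claim from the shared pool
        slot.owner = node.node_id
        return slot
    return None

if __name__ == "__main__":
    shared = SharedMemory()
    node = ComputeNode(node_id=0, primary_capacity=2)
    for i in range(4):
        recycle_slot(node, shared, CacheSlot(slot_id=i))
    print([s.slot_id for s in node.primary], [s.slot_id for s in shared.secondary])
    print(get_slot_for_io(node, shared).slot_id)
```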
  • Publication number: 20230297260
    Abstract: A data storage node includes a plurality of compute nodes that allocate portions of local memory to a shared cache. The shared cache is configured with mirrored and non-mirrored segments that are sized as a function of the percentage of write IOs and read IOs in a historical traffic workload profile specific to an organization or storage node. The mirrored and non-mirrored segments are separately configured with pools of data slots. Within each segment, each pool is associated with same-size data slots that differ in size relative to the data slots of other pools. The sizes of the pools in the mirrored segment are set based on write IO size distribution in the historical traffic workload profile. The sizes of the pools in the non-mirrored segment are set based on read IO size distribution in the historical traffic workload profile.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 21, 2023
    Applicant: Dell Products L.P.
    Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Kaustubh Sahasrabudhe
  • Patent number: 11740816
    Abstract: A data storage node includes a plurality of compute nodes that allocate portions of local memory to a shared cache. The shared cache is configured with mirrored and non-mirrored segments that are sized as a function of the percentage of write IOs and read IOs in a historical traffic workload profile specific to an organization or storage node. The mirrored and non-mirrored segments are separately configured with pools of data slots. Within each segment, each pool is associated with same-size data slots that differ in size relative to the data slots of other pools. The sizes of the pools in the mirrored segment are set based on write IO size distribution in the historical traffic workload profile. The sizes of the pools in the non-mirrored segment are set based on read IO size distribution in the historical traffic workload profile.
    Type: Grant
    Filed: March 18, 2022
    Date of Patent: August 29, 2023
    Assignee: Dell Products L.P.
    Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Kaustubh Sahasrabudhe
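
Publication 20230297260 and patent 11740816 above describe sizing the mirrored and non-mirrored cache segments, and the slot pools within them, from a historical workload profile. The following is a minimal sketch of that sizing arithmetic, assuming a hypothetical profile format (write_fraction, write_size_dist, read_size_dist); the real system derives the profile from observed traffic and applies the result to shared memory rather than returning a dictionary.

```python
def size_shared_cache(total_bytes, profile):
    """Split the shared cache into mirrored (write) and non-mirrored (read)
    segments by the write/read mix, then carve each segment into pools of
    same-size slots according to the corresponding IO size distribution."""
    mirrored_bytes = int(total_bytes * profile["write_fraction"])
    non_mirrored_bytes = total_bytes - mirrored_bytes

    def carve_pools(segment_bytes, size_dist):
        pools = {}
        for slot_size, share in size_dist.items():
            pool_bytes = int(segment_bytes * share)
            pools[slot_size] = pool_bytes // slot_size   # slot count for this pool
        return pools

    return {
        "mirrored": carve_pools(mirrored_bytes, profile["write_size_dist"]),
        "non_mirrored": carve_pools(non_mirrored_bytes, profile["read_size_dist"]),
    }

if __name__ == "__main__":
    # Hypothetical profile: 40% writes, mostly 8 KiB; reads mostly 128 KiB.
    profile = {
        "write_fraction": 0.4,
        "write_size_dist": {8192: 0.7, 65536: 0.3},
        "read_size_dist": {8192: 0.2, 131072: 0.8},
    }
    print(size_shared_cache(64 * 2**30, profile))
```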
  • Patent number: 11599461
    Abstract: Aspects of the present disclosure relate to data cache management. In embodiments, a storage array's memory is provisioned with cache memory, wherein the cache memory includes one or more sets of distinctly sized cache slots. Additionally, a logical storage volume (LSV) is established with at least one logical block address (LBA) group. Further, at least one of the LSV's LBA groups is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by the storage array.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 7, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Michael Scharland, Mark Halstead, Rong Yu, Peng Wu, Benjamin Yoder, Kaustubh Sahasrabudhe
  • Publication number: 20230023314
    Abstract: Aspects of the present disclosure relate to data cache management. In embodiments, a storage array's memory is provisioned with cache memory, wherein the cache memory includes one or more sets of distinctly sized cache slots. Additionally, a logical storage volume (LSV) is established with at least one logical block address (LBA) group. Further, at least one of the LSV's LBA groups is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by the storage array.
    Type: Application
    Filed: July 26, 2021
    Publication date: January 26, 2023
    Applicant: EMC IP Holding Company LLC
    Inventors: Michael Scharland, Mark Halstead, Rong Yu, Peng Wu, Benjamin Yoder, Kaustubh Sahasrabudhe
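
Patent 11599461 and publication 20230023314 above associate each LBA group of a logical storage volume with two or more distinctly sized cache slots based on the IO workload it receives. Below is a minimal sketch of one way such an association could be derived; the trace format, group size, and choose_slot_sizes helper are illustrative assumptions, not the patented method.

```python
from collections import Counter, defaultdict

def choose_slot_sizes(io_trace, group_size_lbas=2048, max_sizes_per_group=2):
    """For each LBA group, pick the most common IO sizes observed against that
    group; those become the distinctly sized cache slots the group is
    associated with. io_trace is a hypothetical list of (lba, io_size_bytes)."""
    per_group = defaultdict(Counter)
    for lba, io_size in io_trace:
        group = lba // group_size_lbas
        per_group[group][io_size] += 1
    return {
        group: [size for size, _ in sizes.most_common(max_sizes_per_group)]
        for group, sizes in per_group.items()
    }

if __name__ == "__main__":
    # Two LBA groups with different dominant IO sizes.
    trace = [(10, 8192), (20, 8192), (30, 131072), (5000, 65536), (5100, 65536)]
    print(choose_slot_sizes(trace))
```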
  • Publication number: 20220300420
    Abstract: Worker threads allocate at least some recycled cache slots of a local portion of a shared memory to the compute node to which the memory portion is local. More specifically, the recycled cache slots are allocated prior to receipt of the IO that the recycled cache slot will be used to service. The allocated recycled cache slots are added to primary queues of each compute node. If a primary queue is full then the worker thread adds the recycled cache slot, unallocated, to a secondary queue. Cache slots in the secondary queue can be claimed by any compute node associated with the shared memory. Cache slots in the primary queue can be used by the local compute node without sending test and set messages via the fabric that interconnects the compute nodes, thereby improving IO latency.
    Type: Application
    Filed: March 17, 2021
    Publication date: September 22, 2022
    Applicant: EMC IP HOLDING COMPANY LLC
    Inventors: Steve Ivester, Kaustubh Sahasrabudhe
  • Publication number: 20220121571
    Abstract: Remote cache slots are donated in a storage array without requiring a cache slot starved compute node to search for candidates in remote portions of a shared memory. One or more donor compute nodes create donor cache slots that are reserved for donation. The cache slot starved compute node broadcasts a message to the donor compute nodes indicating a need for donor cache slots. The donor compute nodes provide donor cache slots to the cache slot starved compute node in response to the message. The message may be broadcast by updating a mask of compute node operational status in the shared memory. The donor cache slots may be provided by providing pointers to the donor cache slots.
    Type: Application
    Filed: October 20, 2020
    Publication date: April 21, 2022
    Applicant: EMC IP HOLDING COMPANY LLC
    Inventors: John Creed, Steve Ivester, John Krasner, Kaustubh Sahasrabudhe
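
Publication 20220121571 above describes donor cache slots handed to a cache-slot-starved compute node after it flips its bit in a shared status mask. The following is a minimal sketch of that message-by-mask flow, with hypothetical names (SharedState, broadcast_starvation, donate_if_needed); real donation passes pointers through shared memory rather than Python lists.

```python
class SharedState:
    """Hypothetical stand-in for the shared-memory status mask and donor pools."""
    def __init__(self, node_count):
        self.starved_mask = 0                                  # one bit per compute node
        self.donor_pools = {n: [] for n in range(node_count)}  # slots reserved for donation
        self.donated = {n: [] for n in range(node_count)}      # pointers handed to starved nodes

def broadcast_starvation(shared, node_id):
    """'Broadcast' by setting the node's bit in the shared operational-status mask."""
    shared.starved_mask |= (1 << node_id)

def donate_if_needed(shared, donor_id, slots_per_grant=2):
    """Donor path: scan the mask and hand pointers to reserved donor slots to
    every node currently marked as cache-slot starved."""
    for node_id in shared.donated:
        if node_id == donor_id or not (shared.starved_mask >> node_id) & 1:
            continue
        grant = shared.donor_pools[donor_id][:slots_per_grant]
        del shared.donor_pools[donor_id][:slots_per_grant]
        shared.donated[node_id].extend(grant)       # "pointers" to the donated slots
        if grant:
            shared.starved_mask &= ~(1 << node_id)  # starvation relieved

if __name__ == "__main__":
    shared = SharedState(node_count=4)
    shared.donor_pools[1] = ["slot@0x1000", "slot@0x2000", "slot@0x3000"]
    broadcast_starvation(shared, node_id=0)
    donate_if_needed(shared, donor_id=1)
    print(shared.donated[0], bin(shared.starved_mask))
```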
  • Patent number: 11080190
    Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to monitor one or more processing threads of a storage device. Each of the one or more processing threads includes two or more cache states. The at least one processor also updates one or more data structures to indicate a subject cache state of each of the one or more processing threads and detect an event that disrupts at least one of the one or more processing threads. Further, the processor determines a cache state of the at least one of the one or more processing threads contemporaneous to the disruption event using the one or more data structures and performs a recovery process for the disrupted at least one of the one or more processing threads.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: August 3, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Kaustubh Sahasrabudhe, Steven Ivester
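
Patent 11080190 above (and its published application 20210011850 at the end of this listing) tracks the cache state of each processing thread in a data structure so that, after a disruption, recovery can be chosen from the state the thread held at that moment. The sketch below uses assumed state names and recovery actions; the abstract does not specify the actual states or recovery procedures.

```python
from enum import Enum, auto

class CacheState(Enum):
    IDLE = auto()
    SLOT_LOCKED = auto()
    DATA_IN_FLIGHT = auto()
    COMMITTED = auto()

class ThreadStateTable:
    """Hypothetical data structure recording the current cache state of each
    processing thread so it can be read back after a disruption."""
    def __init__(self):
        self.state = {}

    def update(self, thread_id, state):
        self.state[thread_id] = state      # updated as the thread moves between cache states

    def state_at_disruption(self, thread_id):
        return self.state.get(thread_id, CacheState.IDLE)

def recover(table, thread_id):
    """Pick a recovery action from the state the thread held when disrupted."""
    actions = {
        CacheState.IDLE: "nothing to do",
        CacheState.SLOT_LOCKED: "release the cache slot lock",
        CacheState.DATA_IN_FLIGHT: "invalidate the slot and replay the IO",
        CacheState.COMMITTED: "acknowledge and release the slot",
    }
    return actions[table.state_at_disruption(thread_id)]

if __name__ == "__main__":
    table = ThreadStateTable()
    table.update(thread_id=7, state=CacheState.DATA_IN_FLIGHT)
    print(recover(table, thread_id=7))     # simulated recovery after a disruption
```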
  • Patent number: 11074113
    Abstract: A storage system includes at least two independent storage engines interconnected by a fabric, each storage engine having two compute nodes. A shared global memory is implemented using cache slots of each of the compute nodes. Memory access operations to the slots of shared global memory are managed by a fabric adapter to guarantee that the operations are atomic. To enable local cache operations to be managed independent of the fabric adapter, a cache metadata data structure includes a global flag bit for each cache slot, that is used to designate the cache slot as globally available or temporarily reserved for local IO processing. The cache metadata data structure also includes a mutex (Peterson lock) for each cache slot to enforce a mutual exclusion concurrency control policy on the cache slot between the two compute nodes of the storage engine when the cache slot is used for local IO processing.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: July 27, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Steven Ivester, Kaustubh Sahasrabudhe
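
Patent 11074113 above pairs a per-slot global flag bit with a per-slot Peterson lock so the two compute nodes of a storage engine can exclude each other during local IO without involving the fabric adapter. Below is a sketch of the Peterson algorithm and the flag, with Python threads standing in for the two compute nodes; real shared-memory implementations need explicit memory ordering that plain Python assignments do not provide.

```python
import threading

class PetersonLock:
    """Two-party Peterson lock, one per cache slot, shared by the two compute
    nodes of a storage engine. Illustrative only."""
    def __init__(self):
        self.flag = [False, False]
        self.turn = 0

    def acquire(self, me):          # me is 0 or 1
        other = 1 - me
        self.flag[me] = True        # declare interest
        self.turn = other           # yield priority to the other side
        while self.flag[other] and self.turn == other:
            pass                    # busy-wait until the other side yields

    def release(self, me):
        self.flag[me] = False

class CacheSlotMeta:
    """Per-slot metadata: a global flag marking the slot as globally available
    versus reserved for local IO, plus the Peterson lock used while local."""
    def __init__(self):
        self.global_flag = True     # True: managed atomically via the fabric adapter
        self.lock = PetersonLock()  # used only while global_flag is False

def local_io(slot, node):
    slot.global_flag = False        # temporarily reserve the slot for local processing
    slot.lock.acquire(node)
    try:
        pass                        # ... local IO against the slot ...
    finally:
        slot.lock.release(node)
        slot.global_flag = True     # hand the slot back to global management

if __name__ == "__main__":
    slot = CacheSlotMeta()
    threads = [threading.Thread(target=local_io, args=(slot, n)) for n in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("both nodes finished local IO on the slot")
```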
  • Publication number: 20210011850
    Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to monitor one or more processing threads of a storage device. Each of the one or more processing threads includes two or more cache states. The at least one processor also updates one or more data structures to indicate a subject cache state of each of the one or more processing threads and detect an event that disrupts at least one of the one or more processing threads. Further, the processor determines a cache state of the at least one of the one or more processing threads contemporaneous to the disruption event using the one or more data structures and performs a recovery process for the disrupted at least one of the one or more processing threads.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 14, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Kaustubh Sahasrabudhe, Steven Ivester