Patents by Inventor Ramesh Doddaiah
Ramesh Doddaiah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Method and apparatus for predicting and exploiting aperiodic backup time windows on a storage system
Patent number: 11782798
Abstract: A multivariate time series model such as a Vector Auto Regression (VAR) model is built using fabric utilization, disk utilization, and CPU utilization time series data. The VAR model leverages interdependencies between multiple time-dependent variables to predict the start and length of an aperiodic backup time window, and causes backup operations to occur during that window so that it is exploited for backup operations. By automatically starting backup operations during predicted aperiodic backup time windows where the CPU, disk, and fabric utilization values are predicted to be low, backup operations can be implemented during time windows where they are less likely to interfere with primary application workloads, or with system application workloads that must run to maintain optimal operation of the storage system.
Type: Grant
Filed: February 11, 2022
Date of Patent: October 10, 2023
Assignee: Dell Products, L.P.
Inventors: Ramesh Doddaiah, Malak Alshawabkeh
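The window-detection step this abstract describes can be sketched as follows: given utilization forecasts (for example, produced by a fitted VAR model), find the start and length of the longest interval in which CPU, disk, and fabric utilization are all predicted to stay low. The function name and threshold values are illustrative, not taken from the patent.

```python
def find_backup_window(cpu, disk, fabric, thresholds=(0.3, 0.4, 0.4)):
    """Return (start_index, length) of the longest forecast run in which
    all three utilization series stay below their thresholds."""
    best_start, best_len = None, 0
    start = None
    for i, vals in enumerate(zip(cpu, disk, fabric)):
        if all(v < t for v, t in zip(vals, thresholds)):
            if start is None:
                start = i  # a candidate low-utilization window begins
            if i - start + 1 > best_len:
                best_start, best_len = start, i - start + 1
        else:
            start = None  # utilization spiked; window ends
    return best_start, best_len

# Example: utilization forecasts for eight future intervals.
cpu    = [0.8, 0.7, 0.2, 0.1, 0.2, 0.6, 0.2, 0.9]
disk   = [0.5, 0.3, 0.1, 0.2, 0.3, 0.5, 0.1, 0.8]
fabric = [0.6, 0.4, 0.2, 0.1, 0.1, 0.7, 0.2, 0.9]
print(find_backup_window(cpu, disk, fabric))  # (2, 3)
```

Backup jobs would then be scheduled to start at the predicted index and run for the predicted length.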
Publication number: 20230297260
Abstract: A data storage node includes a plurality of compute nodes that allocate portions of local memory to a shared cache. The shared cache is configured with mirrored and non-mirrored segments that are sized as a function of the percentage of write IOs and read IOs in a historical traffic workload profile specific to an organization or storage node. The mirrored and non-mirrored segments are separately configured with pools of data slots. Within each segment, each pool is associated with same-size data slots that differ in size relative to the data slots of other pools. The sizes of the pools in the mirrored segment are set based on write IO size distribution in the historical traffic workload profile. The sizes of the pools in the non-mirrored segment are set based on read IO size distribution in the historical traffic workload profile.
Type: Application
Filed: March 18, 2022
Publication date: September 21, 2023
Applicant: Dell Products L.P.
Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Kaustubh Sahasrabudhe
Patent number: 11740816
Abstract: A data storage node includes a plurality of compute nodes that allocate portions of local memory to a shared cache. The shared cache is configured with mirrored and non-mirrored segments that are sized as a function of the percentage of write IOs and read IOs in a historical traffic workload profile specific to an organization or storage node. The mirrored and non-mirrored segments are separately configured with pools of data slots. Within each segment, each pool is associated with same-size data slots that differ in size relative to the data slots of other pools. The sizes of the pools in the mirrored segment are set based on write IO size distribution in the historical traffic workload profile. The sizes of the pools in the non-mirrored segment are set based on read IO size distribution in the historical traffic workload profile.
Type: Grant
Filed: March 18, 2022
Date of Patent: August 29, 2023
Assignee: Dell Products L.P.
Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Kaustubh Sahasrabudhe
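The sizing policy described here can be sketched in a few lines: split the shared cache into mirrored and non-mirrored segments by the historical write/read ratio, then carve each segment into slot pools sized by that workload's IO-size distribution. All names and numbers below are illustrative, not taken from the patent.

```python
def size_cache(total_bytes, write_fraction, write_size_dist, read_size_dist):
    """Partition a shared cache by write/read ratio, then by IO-size mix.

    write_size_dist / read_size_dist map a slot size (KiB) to the
    fraction of write / read IOs of that size in the historical profile.
    """
    mirrored = int(total_bytes * write_fraction)  # writes land in the mirrored segment
    non_mirrored = total_bytes - mirrored         # reads use the non-mirrored segment
    return {
        "mirrored": {sz: int(mirrored * frac) for sz, frac in write_size_dist.items()},
        "non_mirrored": {sz: int(non_mirrored * frac) for sz, frac in read_size_dist.items()},
    }

# Hypothetical historical profile: 40% writes, with distinct size mixes.
pools = size_cache(
    total_bytes=1_000_000,
    write_fraction=0.4,
    write_size_dist={8: 0.5, 64: 0.3, 128: 0.2},
    read_size_dist={8: 0.25, 64: 0.5, 128: 0.25},
)
print(pools["mirrored"][8])  # 200000 bytes reserved for 8 KiB write slots
```

A workload skewed toward small writes thus ends up with a larger pool of small mirrored slots, rather than a one-size-fits-all slot layout.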
Method and Apparatus for Predicting and Exploiting Aperiodic Backup Time Windows on a Storage System
Publication number: 20230259429
Abstract: A multivariate time series model such as a Vector Auto Regression (VAR) model is built using fabric utilization, disk utilization, and CPU utilization time series data. The VAR model leverages interdependencies between multiple time-dependent variables to predict the start and length of an aperiodic backup time window, and causes backup operations to occur during that window so that it is exploited for backup operations. By automatically starting backup operations during predicted aperiodic backup time windows where the CPU, disk, and fabric utilization values are predicted to be low, backup operations can be implemented during time windows where they are less likely to interfere with primary application workloads, or with system application workloads that must run to maintain optimal operation of the storage system.
Type: Application
Filed: February 11, 2022
Publication date: August 17, 2023
Inventors: Ramesh Doddaiah, Malak Alshawabkeh
Publication number: 20230236885
Abstract: Aspects of the present disclosure relate to tuning resource allocations based on a storage array feature's impact on the array's global performance. In embodiments, one or more input/output (IO) features used by a storage array to process one or more IO workloads are determined. Additionally, each IO feature's impact for processing the IO workload within a threshold performance requirement can be determined. Further, at least one IO feature's resource allocation can be tuned based on its IO workload processing impact.
Type: Application
Filed: January 21, 2022
Publication date: July 27, 2023
Applicant: Dell Products L.P.
Inventor: Ramesh Doddaiah
Publication number: 20230229329
Abstract: Aspects of the present disclosure relate to data deduplication (dedup) techniques for storage arrays. In embodiments, a sequence of input/output (IO) operations in an IO stream received from one or more host devices by a storage array is identified. Additionally, a determination is made as to whether previously received IO operations match the identified IO sequence based on an IO rolling offsets empirical distribution model. Further, one or more data deduplication (dedup) techniques are performed on the matching IO sequence based on a comparison of a source compression technique and a target compression technique related to the identified IO sequence.
Type: Application
Filed: January 20, 2022
Publication date: July 20, 2023
Applicant: Dell Products L.P.
Inventors: Ramesh Doddaiah, Peng Wu, Jingtong Liu
Patent number: 11698744
Abstract: Aspects of the present disclosure relate to data deduplication (dedup) techniques for storage arrays. At least one input/output (IO) operation in an IO workload received by a storage array can be identified. Each of the IOs can relate to a data track of the storage array. A probability of the at least one IO being similar to a previously stored IO can be determined. A data deduplication (dedup) operation can be performed on the at least one IO based on the probability. The probability can be less than one hundred percent (100%).
Type: Grant
Filed: October 26, 2020
Date of Patent: July 11, 2023
Assignee: EMC IP Holding Company LLC
Inventor: Ramesh Doddaiah
Patent number: 11687243
Abstract: Aspects of the present disclosure relate to reducing the latency of data deduplication. In embodiments, an input/output (IO) workload received by a storage array is monitored. Further, at least one IO write operation in the IO workload is identified. A space-efficient probabilistic data structure is used to determine if a director board is associated with the IO write. Additionally, the IO write operation is processed based on the determination.
Type: Grant
Filed: July 22, 2021
Date of Patent: June 27, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Venkata Ippatapu, Ramesh Doddaiah, Sweetesh Singh
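A common choice for the "space-efficient probabilistic data structure" this abstract mentions is a Bloom filter, which can report that a director board may hold data for a write (with a small false-positive rate) but never misses data it has seen. The patent does not name the structure, so the sketch below is an assumption; class and variable names are hypothetical.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over an integer bitmask."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = 0

    def _positions(self, key):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # True means "possibly present"; False is definitive absence.
        return all(self.bits & (1 << p) for p in self._positions(key))

board_filter = BloomFilter()
board_filter.add("track-0042")  # the board has previously seen this track
print(board_filter.might_contain("track-0042"))  # True
```

On an IO write, a cheap `might_contain` check can rule out boards with certainty, avoiding cross-board lookups on the latency-critical path.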
Patent number: 11645012
Abstract: A Random Read Miss (RRM) distribution process monitors execution parameters of first, second, and third emulations of a storage engine, and distributes newly received read operations between the emulations. The RRM distribution process assigns newly received read operations to the first emulation, unless the CPU thread usage of the first emulation or the response time of the first emulation meet a first set of criteria. The RRM distribution process secondarily assigns newly received read operations to the second emulation, unless the CPU thread usage of the second emulation or the response time of the second emulation meet a second set of criteria. The RRM distribution process assigns all other newly received read operations, that are not assigned to the first emulation or to the second emulation, to the third emulation. Distribution of read IOs between the emulations enables the storage engine to increase IOPS while minimizing response time.
Type: Grant
Filed: January 18, 2022
Date of Patent: May 9, 2023
Assignee: Dell Products, L.P.
Inventors: Ramesh Doddaiah, Peng Wu, Rong Yu, Earl Medeiros, Peng Yin
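The cascading assignment rule described above reduces to a short selection function: a read goes to the first emulation unless its CPU-thread usage or response time crosses a limit, then to the second under its own limits, otherwise to the third. The threshold values and dictionary shapes here are illustrative.

```python
def assign_read(stats, limits):
    """Pick an emulation for a newly received read.

    stats and limits map an emulation name to a
    (cpu_thread_usage, response_time_ms) tuple.
    """
    for emulation in ("first", "second"):
        cpu, rt = stats[emulation]
        cpu_max, rt_max = limits[emulation]
        if cpu < cpu_max and rt < rt_max:
            return emulation  # emulation is healthy; assign the read here
    return "third"  # all remaining reads go to the third emulation

stats = {"first": (0.95, 4.0), "second": (0.50, 2.0)}
limits = {"first": (0.90, 5.0), "second": (0.80, 5.0)}
print(assign_read(stats, limits))  # second
```

In the example, the first emulation's CPU usage (0.95) exceeds its limit (0.90), so the read cascades to the second emulation.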
Patent number: 11625327
Abstract: Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and un-mirrored caches of global memory, and of their respective cache slot pools, are dynamically balanced. Each of the mirrored and un-mirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of the cache slots of its distinct size based on the anticipated I/O workloads. Cache pools can be further assigned the amount of distinctly sized cache slots based on expected service levels (SLs) of a customer. Cache pools can also be assigned the amount of the distinctly sized cache slots based on one or more of predicted I/O request sizes and predicted frequencies of different I/O request sizes of the anticipated I/O workloads.
Type: Grant
Filed: December 10, 2019
Date of Patent: April 11, 2023
Assignee: EMC IP Holding Company LLC
Inventors: John Krasner, Ramesh Doddaiah
Patent number: 11599441
Abstract: Embodiments of the present disclosure relate to throttling processing threads of a storage device. One or more input/output (I/O) workloads of a storage device can be monitored. One or more resources consumed by each thread of each storage device component to process each operation included in a workload can be analyzed. Based on the analysis, consumption of each resource consumed by each thread can be controlled.
Type: Grant
Filed: April 2, 2020
Date of Patent: March 7, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Mohammed Asher, Rong Yu
Patent number: 11593267
Abstract: Aspects of the present disclosure relate to asynchronous memory management. In embodiments, an input/output (IO) workload is received at a storage array. Further, one or more read-miss events corresponding to the IO workload are identified. Additionally, at least one of the storage array's cache slots is bound to a track identifier (TID) corresponding to the read-miss events based on one or more of the read-miss events' two-dimensional metrics.
Type: Grant
Filed: October 28, 2021
Date of Patent: February 28, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Rong Yu, Peng Wu
Publication number: 20230027284
Abstract: Aspects of the present disclosure relate to reducing the latency of data deduplication. In embodiments, an input/output (IO) workload received by a storage array is monitored. Further, at least one IO write operation in the IO workload is identified. A space-efficient probabilistic data structure is used to determine if a director board is associated with the IO write. Additionally, the IO write operation is processed based on the determination.
Type: Application
Filed: July 22, 2021
Publication date: January 26, 2023
Applicant: EMC IP Holding Company LLC
Inventors: Venkata Ippatapu, Ramesh Doddaiah, Sweetesh Singh
Patent number: 11513849
Abstract: A scheduler for a storage node uses multi-dimensional weighted resource cost matrices to schedule processing of IOs. A separate matrix is created for each computing node of the storage node via machine learning or regression analysis. Each matrix includes distinct dimensions for each emulation of the computing node for which the matrix is created. Each dimension includes modeled costs in terms of amounts of resources of various types required to process an IO of various IO types. An IO received from a host by a computing node is not scheduled for processing by that computing node unless enough resources are available at each emulation of that computing node. If enough resources are unavailable at an emulation, then the IO is forwarded to a different computing node that has enough resources at each of its emulations. A weighted resource cost for processing the IO is calculated and used to determine scheduling priority.
Type: Grant
Filed: July 20, 2021
Date of Patent: November 29, 2022
Assignee: Dell Products L.P.
Inventor: Ramesh Doddaiah
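The admission check and priority calculation described above can be sketched with plain dictionaries standing in for the cost matrices: an IO is schedulable only if every emulation on the node can cover every resource the IO's type requires, and a weighted sum of those costs gives its priority. The emulation names, resources, and weights here are illustrative, not the patented (learned) model.

```python
def can_schedule(cost_matrix, available):
    """cost_matrix and available map an emulation to {resource: amount}.
    True only if every emulation can cover every required resource."""
    return all(
        available[emu][res] >= need
        for emu, needs in cost_matrix.items()
        for res, need in needs.items()
    )

def weighted_cost(cost_matrix, weights):
    """Scalar priority: weighted sum of all per-emulation resource costs."""
    return sum(
        weights[res] * need
        for needs in cost_matrix.values()
        for res, need in needs.items()
    )

# Hypothetical modeled cost of one read IO, and current node availability.
read_cost = {"front_end": {"cpu": 2, "memory": 1}, "back_end": {"cpu": 1, "memory": 4}}
available = {"front_end": {"cpu": 8, "memory": 8}, "back_end": {"cpu": 8, "memory": 2}}
print(can_schedule(read_cost, available))                      # False
print(weighted_cost(read_cost, {"cpu": 1.0, "memory": 0.5}))   # 5.5
```

Here the back-end emulation is short on memory, so the IO would be forwarded to another computing node rather than queued locally.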
Patent number: 11494283
Abstract: A storage system has a QoS recommendation engine that monitors storage system operational parameters and generates recommended changes to host QoS metrics (throughput, bandwidth, and response time requirements) based on differences between the host QoS metrics and storage system operational parameters. The recommended host QoS metrics may be automatically implemented to adjust the host QoS metrics. By reducing host QoS metrics during times when the storage system is experiencing high volumes of workload, it is possible to throttle workload at the host computer rather than requiring the storage system to expend processing resources queueing the workload prior to processing. This can enable the overall throughput of the storage system to increase. When the workload on the storage system is reduced, updated recommendations are provided to enable the host QoS metrics to increase. Historical analysis is also used to generate recommended host QoS metrics.
Type: Grant
Filed: May 4, 2020
Date of Patent: November 8, 2022
Assignee: Dell Products, L.P.
Inventor: Ramesh Doddaiah
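The feedback loop described above can be sketched as a simple rule: when storage-system utilization runs high, recommend lower host QoS limits so workload is throttled at the host; when utilization falls, recommend raising them again. The scaling factors and thresholds below are illustrative simplifications, not the patented recommendation engine.

```python
def recommend_qos(current_limits, utilization, high=0.85, low=0.50):
    """Return recommended host QoS limits given storage utilization (0-1)."""
    if utilization > high:
        factor = 0.8   # overloaded: shed 20% of the host's allowed load
    elif utilization < low:
        factor = 1.25  # headroom available: let the limits recover
    else:
        return dict(current_limits)  # in the comfort band: no change
    return {metric: round(value * factor) for metric, value in current_limits.items()}

limits = {"iops": 10_000, "bandwidth_mbps": 800}
print(recommend_qos(limits, utilization=0.92))  # {'iops': 8000, 'bandwidth_mbps': 640}
```

Throttling at the host this way spares the array the cost of queueing work it cannot yet process.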
Publication number: 20220327246
Abstract: An aspect of the present disclosure relates to one or more data decryption techniques. In embodiments, an input/output operation (IO) stream including one or more encrypted IOs is received by a storage array. Each encrypted IO is assigned an encryption classification. Further, each encrypted IO is processed based on its assigned encryption classification.
Type: Application
Filed: April 13, 2021
Publication date: October 13, 2022
Applicant: EMC IP Holding Company LLC
Inventors: Ramesh Doddaiah, Malak Alshawabkeh
Publication number: 20220326865
Abstract: Aspects of the present disclosure relate to data deduplication (dedupe). In embodiments, an input/output operation (IO) stream is received by a storage array. In addition, a received IO sequence in the IO stream that matches a previously received IO sequence is identified. Further, a data deduplication (dedupe) technique is performed based on a selected data dedupe policy. The data dedupe policy can be selected based on a comparison of the quality of service (QoS) related to the received IO sequence and the QoS related to the previously received IO sequence.
Type: Application
Filed: April 12, 2021
Publication date: October 13, 2022
Applicant: EMC IP Holding Company LLC
Inventors: Ramesh Doddaiah, Malak Alshawabkeh
Patent number: 11467906
Abstract: An apparatus comprises a storage system comprising at least one processing device and a plurality of storage devices. The at least one processing device is configured to obtain a given input-output operation from a host device and to determine that the given input-output operation comprises an indicator having a particular value. The particular value indicates that the given input-output operation is a repeat of a prior input-output operation. The at least one processing device is further configured to rebuild at least one resource of the storage system that is designated for servicing the given input-output operation based at least in part on the determination that the given input-output operation comprises the indicator having the particular value.
Type: Grant
Filed: August 2, 2019
Date of Patent: October 11, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Ramesh Doddaiah, Bernard A. Mulligan, III
Patent number: 11429294
Abstract: In a data storage system in which a full-size allocation unit is used for storage of uncompressed data, an optimal reduced size allocation unit is selected for storage of compressed data. Changes in the compressed size of at least one full-size allocation unit of representative data are monitored over time. The representative data may be selected based on write frequency, relocation frequency, or both. Compression size values are counted and weighted to calculate the optimal reduced allocation unit size. The optimal reduced size allocation unit is used for storage of compressed data. A full-size allocation unit of data that cannot be accommodated by a reduced size allocation unit when compressed is stored uncompressed.
Type: Grant
Filed: May 22, 2020
Date of Patent: August 30, 2022
Assignee: Dell Products L.P.
Inventors: Ramesh Doddaiah, Anoop Raghunathan
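The selection step described above can be sketched as a counting problem: tally the observed compressed sizes of representative allocation units, then pick the smallest candidate unit size that accommodates a chosen fraction of them. The coverage cutoff below is an illustrative simplification of the patented weighted calculation.

```python
from collections import Counter

def pick_reduced_unit(compressed_sizes_kb, candidates, coverage=0.9):
    """Smallest candidate unit size (KiB) that fits `coverage` of the
    observed compressed sizes; falls back to the largest candidate."""
    counts = Counter(compressed_sizes_kb)
    total = len(compressed_sizes_kb)
    for unit in sorted(candidates):
        covered = sum(n for size, n in counts.items() if size <= unit)
        if covered / total >= coverage:
            return unit
    return max(candidates)  # nothing smaller fits enough data

# Observed compressed sizes (KiB) of a 128 KiB full-size unit over time.
sizes = [40, 48, 56, 56, 64, 64, 64, 72, 80, 120]
print(pick_reduced_unit(sizes, candidates=[64, 80, 96, 128]))  # 80
```

Units whose compressed data does not fit the chosen reduced size (here, the 120 KiB observation) would be stored uncompressed in a full-size unit, as the abstract notes.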
Patent number: 11409667
Abstract: A deduplication engine maintains a hash table containing hash values of tracks of data stored on managed drives of a storage system. The deduplication engine keeps track of how frequently the tracks are accessed by the deduplication engine using an exponential moving average for each track. Target tracks which are frequently accessed by the deduplication engine are cached in local memory, so that required byte-by-byte comparisons between the target track and write data may be performed locally rather than requiring the target track to be read from managed drives. The deduplication engine implements a Least Recently Used (LRU) cache data structure in local memory to manage locally cached tracks of data. If a track is to be removed from local memory, a final validation of the target track is implemented on the version stored on managed drives before evicting the track from the LRU cache.
Type: Grant
Filed: April 15, 2021
Date of Patent: August 9, 2022
Assignee: Dell Products, L.P.
Inventors: Venkata Ippatapu, Ramesh Doddaiah
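The two mechanisms in this abstract can be sketched compactly: an exponential moving average tracks how often each target track is hit by the deduplication engine, and an LRU structure keeps the hot tracks in local memory, reporting the least recently used track when capacity is exceeded so the caller can validate it before final eviction. The class name and the smoothing factor are illustrative, not from the patent.

```python
from collections import OrderedDict

ALPHA = 0.3  # EMA smoothing factor (illustrative)

def update_ema(ema, hit):
    """Blend the latest observation (1 = accessed, 0 = not) into the average."""
    return ALPHA * hit + (1 - ALPHA) * ema

class LRUTrackCache:
    """Local-memory cache of hot target tracks with LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tracks = OrderedDict()

    def access(self, track_id, data):
        self.tracks[track_id] = data
        self.tracks.move_to_end(track_id)  # mark as most recently used
        if len(self.tracks) > self.capacity:
            evicted, _ = self.tracks.popitem(last=False)  # drop the LRU track
            return evicted  # caller validates this track before final eviction
        return None

cache = LRUTrackCache(capacity=2)
cache.access("t1", b"...")
cache.access("t2", b"...")
print(cache.access("t3", b"..."))  # t1
```

Caching the hot tracks this way lets the byte-by-byte comparisons run against local memory instead of re-reading the target track from the managed drives.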