Patents by Inventor Ramesh Doddaiah
Ramesh Doddaiah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12380065
Abstract: For a source device with an activated snapshot, time series-based prediction of source device write IO operations is implemented, on a per-extent basis, to predict a subset of the source device extents that are likely to be hot (receive write IO operations) during an upcoming time window. Snapshot tracks corresponding to tracks of the predicted hot extents are pre-deduplicated, to accelerate write IO operations on the source device. In instances where the time series-based prediction correctly predicts write IO operations on tracks of an extent, and the tracks of the extent of the source device are pre-deduplicated on the snapshot, it is possible to implement the write IO operations as redirect-on-write operations, without first replicating the original track of source data for use by the snapshot. Write IO operations on tracks that are not pre-deduplicated are implemented as copy-on-write operations.
Type: Grant
Filed: June 2, 2024
Date of Patent: August 5, 2025
Assignee: Dell Products, L.P.
Inventors: Ramesh Doddaiah, Sandeep Chandrashekhara, Mohammed Asher
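The decision flow described in this abstract can be sketched as follows. This is an illustrative toy, not the patented implementation: a simple moving average stands in for the time-series model, and all names (`predict_hot_extents`, the extent/track identifiers) are hypothetical.

```python
def predict_hot_extents(write_history, threshold=10):
    """write_history: {extent_id: [write count per past window, ...]}.
    An extent is predicted hot if its average write count over the
    past windows exceeds the threshold (stand-in for the patent's
    time-series prediction)."""
    hot = set()
    for extent_id, counts in write_history.items():
        if sum(counts) / len(counts) > threshold:
            hot.add(extent_id)
    return hot

def handle_write(track, prededuped_tracks):
    """Redirect-on-write if the snapshot track was pre-deduplicated,
    otherwise fall back to copy-on-write."""
    return "redirect-on-write" if track in prededuped_tracks else "copy-on-write"

history = {"ext0": [50, 60, 55], "ext1": [1, 0, 2]}
hot = predict_hot_extents(history)            # only ext0 is predicted hot
prededuped = {("ext0", t) for t in range(4)}  # pre-dedup tracks of hot extents
print(handle_write(("ext0", 2), prededuped))  # redirect-on-write
print(handle_write(("ext1", 0), prededuped))  # copy-on-write
```

The point of the pre-deduplication step is that writes landing on correctly predicted tracks avoid the extra read-and-replicate cost of copy-on-write.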
-
Patent number: 12373334
Abstract: In an active-active replication environment in which sequential read IOs are distributed across storage array ports and replicas, a first read data size controls selection of a different path to a currently targeted replica to distribute sequential reads across storage system ports and a second read data size controls when a different replica is selected to distribute the sequential reads across storage systems. The storage arrays that maintain the replicas are phase-coordinated to pre-fetch only sequential data that will be accessed in the near future from their local replica. For example, a first storage array prefetches sequential data from a first replica in chunks corresponding to the first data size until sequential data corresponding to the second data size has been prefetched, skips sequential addresses corresponding to the second data size, and prefetches sequential data corresponding to the first data size that is next in sequence relative to the skipped sequential addresses.
Type: Grant
Filed: April 24, 2024
Date of Patent: July 29, 2025
Assignee: Dell Products L.P.
Inventors: Arieh Don, G Vinay Rao, Sanjib Mallick, Ramesh Doddaiah
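The phase-coordinated prefetch pattern in the example above can be sketched for two replicas: each array prefetches a "second size" span of sequential data in "first size" chunks, then skips the span its peer array will cover. The function name and the two-array assumption are mine, for illustration only.

```python
def prefetch_chunks(phase, n_arrays, first_size, second_size, total):
    """Return the (start, end) chunks this array prefetches from its
    local replica, over the sequential address range [0, total)."""
    chunks = []
    start = phase * second_size
    while start < total:
        # Prefetch this array's span in first_size chunks.
        for off in range(0, second_size, first_size):
            lo = start + off
            if lo >= total:
                break
            chunks.append((lo, min(lo + first_size, total)))
        # Skip the spans assigned to the other arrays.
        start += n_arrays * second_size
    return chunks

# Two arrays, 4 KB chunks, 8 KB per-array span, 32 KB of sequential data.
a0 = prefetch_chunks(0, 2, 4096, 8192, 32768)
a1 = prefetch_chunks(1, 2, 4096, 8192, 32768)
# Together the two arrays cover every sequential address exactly once.
```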
-
Patent number: 12366985
Abstract: A storage system is configured with pools of processor cores. Each pool corresponds uniquely to one of the supported service levels of the storage system. In a dynamic time series forecast adjustment mode, processor cores within each pool run at a clock speed that is statically defined for the service level corresponding to the respective pool and core affiliations with pools are dynamically adjusted based on modelled data access latency. During a scheduled event, an event guided core matrix profile overrides the time series forecast adjusted core matrix profile. The event guided core matrix profile includes core clock speeds and pool affiliations, thereby enabling rapid reconfiguration.
Type: Grant
Filed: May 10, 2024
Date of Patent: July 22, 2025
Assignee: Dell Products L.P.
Inventors: John Creed, Ramesh Doddaiah, Evan Barry
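A loose sketch of the pool mechanism described above, with all names, service levels, and thresholds assumed: each pool keeps a statically defined clock speed, while core counts migrate between pools when a pool's modeled latency misses its target.

```python
# Static per-service-level clock speeds (values are illustrative only).
POOL_CLOCK_MHZ = {"diamond": 3500, "gold": 2800, "silver": 2100}

def rebalance(pools, modeled_latency_ms, target_ms, donor_order=("silver", "gold")):
    """pools: {service_level: core count}. Move one core from the first
    donor pool that can spare one into any pool whose modeled data
    access latency exceeds its target."""
    for level, latency in modeled_latency_ms.items():
        if latency > target_ms[level]:
            for donor in donor_order:
                if donor != level and pools[donor] > 1:
                    pools[donor] -= 1
                    pools[level] += 1
                    break
    return pools
```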
-
Patent number: 12360674
Abstract: One or more aspects of the present disclosure relate to distributing write destaging amongst a storage array's boards. In embodiments, an input/output (IO) workload is received at a storage array. In addition, a write IO request in the IO workload is directed to a target board of a plurality of boards of the storage array based on one or more performance-related hyperparameters of each board. Further, write data of the write IO request to a storage device is destaged by the target board.
Type: Grant
Filed: February 1, 2023
Date of Patent: July 15, 2025
Assignee: Dell Products L.P.
Inventors: Lixin Pang, Rong Yu, Ramesh Doddaiah, Shao Kuang Hu
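The board-selection step could look like the following sketch. The abstract does not enumerate the hyperparameters; queue depth and free cache slots are stand-ins I chose for illustration.

```python
def pick_target_board(boards):
    """boards: {board_id: {"queue_depth": int, "free_cache_slots": int}}.
    Direct the write IO to the board with the shallowest queue,
    breaking ties by the most free cache slots."""
    return min(boards,
               key=lambda b: (boards[b]["queue_depth"],
                              -boards[b]["free_cache_slots"]))
```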
-
Publication number: 20250224994
Abstract: Thread CPU cycle utilization patterns are used to analyze storage array performance problems. Per-thread CPU cycle utilization statistics are monitored, and code path utilizations are counted. Time-series clustering is used to identify code path activity clusters and thread clusters having similar CPU cycle utilization patterns. The most active code paths are selected from each code path activity cluster. The thread clusters that are most highly correlated with the selected code paths are selected and ranked based on CPU cycle utilization.
Type: Application
Filed: January 8, 2024
Publication date: July 10, 2025
Applicant: Dell Products L.P.
Inventors: Pankaj Soni, Ramesh Doddaiah, Malak Alshawabkeh, Eddie Tran, Garima Jain
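As a rough stand-in for the correlation-and-ranking step (the publication uses time-series clustering; a plain Pearson correlation is substituted here, and all names are mine):

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def rank_threads(thread_series, code_path_series):
    """Rank thread ids by how strongly their per-interval CPU cycle
    series correlates with the hot code path's activity series."""
    return sorted(thread_series,
                  key=lambda t: pearson(thread_series[t], code_path_series),
                  reverse=True)
```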
-
Patent number: 12353733
Abstract: Autonomous power control is provided on a storage system to enable service level maximum response time controls to be achieved, while also minimizing power consumption, by enabling service level minimum response times to be specified and enforced. A workload/CPU clock speed model is created for the storage system correlating a maximum number of IOPS that the storage system can process for different CPU clock speeds for each workload type. A service level agreement specifies a maximum storage system response time. A storage system minimum response time is also specified, that is used to identify a target CPU clock speed for the workload type being provided by a host. The CPU clock speed is lowered to reduce energy consumption by the storage system toward the target CPU clock speed, and storage system performance is monitored to ensure that the storage system complies with the storage system maximum response time.
Type: Grant
Filed: November 6, 2023
Date of Patent: July 8, 2025
Assignee: Dell Products, L.P.
Inventors: Owen Martin, Benjamin A. F. Randolph, Ramesh Doddaiah
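The workload/clock-speed model described above can be sketched as a table mapping clock speeds to the maximum IOPS sustainable at each speed, from which the lowest clock meeting current demand is chosen. Model contents and names are illustrative assumptions, not the patent's data.

```python
# Hypothetical workload/CPU clock speed model: per workload type,
# {clock speed in MHz: max IOPS sustainable at that speed}.
MODEL = {
    "oltp": {1800: 50_000, 2400: 80_000, 3000: 120_000},
}

def target_clock(workload_type, demanded_iops):
    """Pick the lowest clock speed that still meets the demanded IOPS,
    minimizing power draw; fall back to the maximum clock if none do."""
    caps = MODEL[workload_type]
    feasible = [mhz for mhz, iops in caps.items() if iops >= demanded_iops]
    return min(feasible) if feasible else max(caps)
```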
-
Publication number: 20250217059
Abstract: A shared global memory segmentation conversion process optimizes conversion of shared Global Memory (GM) in connection with changes to a global memory segmentation policy. When a policy region is to be reduced in size, the GM banks of the policy region are ranked and one or more of the lowest performing GM banks in the policy region are selected to be moved to the other policy region. Similarly, within each policy region, IO pools, used to create slots of particular sizes, that are being reduced in size are determined, and the GM banks of the IO pools are ranked. One or more of the lowest performing GM banks are selected and moved to an IO pool that is being increased in size. This process iterates, re-ranking the remaining GM banks in the policy region/IO pools, until the correct allocation of GM policy regions and IO pools is achieved.
Type: Application
Filed: January 3, 2024
Publication date: July 3, 2025
Inventors: Kaustubh Sahasrabudhe, Ramesh Doddaiah, Malak Alshawabkeh
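The iterate-and-re-rank loop can be sketched as below; the bank identifiers and performance scores are hypothetical placeholders.

```python
def shrink_region(region_banks, scores, moves_needed):
    """Repeatedly re-rank a shrinking region's GM banks by performance
    score and move the lowest performer out, as the abstract describes.
    Returns (remaining banks, banks moved to the growing region)."""
    remaining, moved = list(region_banks), []
    for _ in range(moves_needed):
        remaining.sort(key=lambda b: scores[b])  # re-rank each iteration
        moved.append(remaining.pop(0))           # lowest performer moves
    return remaining, moved
```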
-
Publication number: 20250217294
Abstract: Metadata page prefetch processing for incoming IO operations is provided to increase storage system performance by reducing the frequency of metadata page miss events during IO processing. When an IO is received at a storage system, the IO is placed in an IO queue to be scheduled for processing by an IO processing thread. A metadata page prefetch thread reads the LBA address of the IO and determines whether all of the metadata page(s) that will be needed by the IO processing thread are contained in IO thread metadata resources. In response to a determination that one or more of the required metadata pages are not contained in IO thread metadata resources, the metadata page prefetch thread instructs a MDP thread to move the required metadata page(s) from metadata storage to IO thread metadata resources. The IO processing thread then implements the IO operation using the prefetched metadata.
Type: Application
Filed: January 1, 2024
Publication date: July 3, 2025
Inventors: Ramesh Doddaiah, Sandeep Chandrashekhara, Mohammed Aamir Vt, Mohammed Asher
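The prefetch check can be sketched as follows. The LBA-to-page mapping constant and all names are assumptions for illustration, not details from the publication.

```python
LBAS_PER_METADATA_PAGE = 1024  # assumed mapping granularity

def pages_for_io(start_lba, n_lbas):
    """Metadata pages the IO processing thread will need for this IO's
    LBA range."""
    first = start_lba // LBAS_PER_METADATA_PAGE
    last = (start_lba + n_lbas - 1) // LBAS_PER_METADATA_PAGE
    return set(range(first, last + 1))

def prefetch_missing(start_lba, n_lbas, resident_pages):
    """Pages the prefetch thread must ask to be moved from metadata
    storage into IO thread metadata resources before the IO runs."""
    return pages_for_io(start_lba, n_lbas) - resident_pages
```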
-
Publication number: 20250217276
Abstract: Two storage arrays implement a Remote Data Replication (RDR) facility between a primary storage array and a backup storage array. The primary storage array collects global memory usage statistics by host processes on the primary array and global memory usage by the RDR process on the primary array. The backup storage array collects global memory usage statistics by host processes on the backup array and global memory usage by the RDR process on the backup array. The primary storage array transmits information about the global memory usage by the RDR process on the primary array to the backup array. The backup storage array transmits information about the global memory usage by the RDR process on the backup array to the primary array. Both arrays use global memory usage statistics by their respective local host process and the RDR process on both arrays, to locally set global memory segmentation policies.
Type: Application
Filed: January 1, 2024
Publication date: July 3, 2025
Inventors: Malak Alshawabkeh, Ramesh Doddaiah, Kaustubh Sahasrabudhe
-
Patent number: 12346258
Abstract: Metadata page prefetch processing for incoming IO operations is provided to increase storage system performance by reducing the frequency of metadata page miss events during IO processing. When an IO is received at a storage system, the IO is placed in an IO queue to be scheduled for processing by an IO processing thread. A metadata page prefetch thread reads the LBA address of the IO and determines whether all of the metadata page(s) that will be needed by the IO processing thread are contained in IO thread metadata resources. In response to a determination that one or more of the required metadata pages are not contained in IO thread metadata resources, the metadata page prefetch thread instructs a MDP thread to move the required metadata page(s) from metadata storage to IO thread metadata resources. The IO processing thread then implements the IO operation using the prefetched metadata.
Type: Grant
Filed: January 1, 2024
Date of Patent: July 1, 2025
Assignee: Dell Products, L.P.
Inventors: Ramesh Doddaiah, Sandeep Chandrashekhara, Mohammed Aamir Vt, Mohammed Asher
-
Patent number: 12346435
Abstract: Host IO write operations on a given storage group are grouped into cycles of a write destage pipeline for the storage group. Host writes are collected during a capture cycle, while host writes from a previous capture cycle are analyzed and destaged during an apply cycle. After the previous apply cycle has completed, a cycle switch occurs and the current capture cycle becomes the new apply cycle. Anomaly detection is implemented before any IO write operations are destaged during the apply cycle, and if an anomaly is detected the write destage is stopped. The entire apply cycle is either destaged to disk, or is discarded, depending on whether the anomaly is confirmed. By maintaining the order of writes across different cycles, which act as consistency points, it is possible to implement ransomware protection in real time on host write operations, while using ordered write destage to maintain data consistency.
Type: Grant
Filed: April 5, 2023
Date of Patent: July 1, 2025
Assignee: Dell Products, L.P.
Inventors: Sandeep Chandrashekhara, Ramesh Doddaiah, Mohammed Asher Vt
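The all-or-nothing apply cycle can be sketched as below. The anomaly test here (write-rate spike versus prior cycles) is a stand-in of my own; the patent does not specify the detector.

```python
def detect_anomaly(cycle_write_count, baseline_counts, factor=3.0):
    """Flag the apply cycle when its write count exceeds `factor` times
    the average of prior cycles (illustrative stand-in detector)."""
    avg = sum(baseline_counts) / len(baseline_counts)
    return cycle_write_count > factor * avg

def apply_cycle(writes, baseline_counts):
    """Either destage the whole cycle in order, preserving it as a
    consistency point, or discard it entirely if an anomaly is found."""
    if detect_anomaly(len(writes), baseline_counts):
        return []          # cycle discarded pending confirmation
    return list(writes)    # cycle destaged as one consistency point
```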
-
Publication number: 20250208940
Abstract: Systems and methods for time-series based machine learning anomaly detection and prevention are described. In an illustrative, non-limiting embodiment, an Information Handling System (IHS) may include: a processor; and a memory coupled to the processor, where the memory includes program instructions stored thereon that, upon execution by the processor, cause the IHS to: obtain communication data associated with the IHS for a plurality of time windows, including a particular time window, and previous time windows before the particular time window; determine, using a machine learning model, that the communication data for the particular time window includes an anomaly; and based on the determination, perform one or more actions. In some embodiments, the program instructions further cause the IHS to: based on the communication data, determine time-series data for a plurality of attributes of the communication data; and determine that an attribute includes an outlier in the particular time window.
Type: Application
Filed: December 22, 2023
Publication date: June 26, 2025
Applicant: Dell Products, L.P.
Inventors: Suresh K. Krishnan, Ramesh Doddaiah, Mikhail Salnikov
-
Publication number: 20250200132
Abstract: An information handling system may include at least one processor; a volatile memory; and a non-volatile memory. The information handling system may be configured to: determine pairwise correlations between data in a plurality of regions of the non-volatile memory; and destage data from the volatile memory to the non-volatile memory in accordance with the pairwise correlations, such that first data is destaged with second data having a high degree of correlation to the first data.
Type: Application
Filed: December 13, 2023
Publication date: June 19, 2025
Applicant: Dell Products L.P.
Inventors: Ramesh DODDAIAH, Daniel L. HAMLIN, Ganesh KRISHNAMURTHY, Malathi RAMAKRISHNAN
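A minimal sketch of the grouping step, with region names, the correlation table, and the threshold all assumed for illustration:

```python
def destage_group(region, correlations, dirty, threshold=0.8):
    """correlations: {(region_a, region_b): score}, stored once per pair.
    Return the set of dirty regions to destage alongside `region`,
    i.e. the region itself plus any highly correlated dirty peers."""
    group = {region}
    for other in dirty:
        score = correlations.get((region, other),
                                 correlations.get((other, region), 0.0))
        if other != region and score >= threshold:
            group.add(other)
    return group
```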
-
Publication number: 20250199878
Abstract: A storage array engine has two single-board storage directors with CPU complexes and PCIe switches that are interconnected by a fabric-less PCIe NTB. IO response times of the storage directors are modeled, e.g., as a function of controller memory interface bandwidth utilization, switch utilization, fall-through time of a non-mirrored segment of the volatile memory, central processing unit complex utilization, number of available data slots in the non-mirrored segment of the volatile memory, and average depth of all IO-related queues. Responsive to receipt of an IO, a data slot in either local or remote storage director memory is allocated based on the difference between computed IO response times of the storage directors. The fabric-less link is used to service IOs using remote memory, thereby mitigating additional loading of the local CPU complex.
Type: Application
Filed: December 14, 2023
Publication date: June 19, 2025
Applicant: Dell Products L.P.
Inventors: Rong Yu, Earl Medeiros, Thomas Rogers, Ramesh Doddaiah
-
Patent number: 12332971
Abstract: Streaming machine telemetry (SMT) event counters are placed in critical code paths of software executing on a storage system. Each monitoring interval the values of the SMT counters are reported. When a critical error occurs on the storage system, a time series set of SMT counters from a set of previous monitoring intervals is labeled with the error type and used as a training example for a learning process. The learning process is trained to learn correlations between time series sets of SMT counter values and labeled error types. Once trained, a checkpoint of the learning model is deployed as an inference model and used to predict the likely occurrence of errors before the errors occur. Predicted errors are logged into a proactive service request queue, and feedback related to the predicted errors is used to continue training the learning process.
Type: Grant
Filed: October 13, 2022
Date of Patent: June 17, 2025
Assignee: Dell Products, L.P.
Inventors: Owen Martin, Ramesh Doddaiah
-
Publication number: 20250193129
Abstract: An information handling system may include at least one processor and a network interface adapter. The information handling system may be configured to: couple to a plurality of client systems via the network interface adapter; implement a quality of service (QoS) policy for each client system; adjust the QoS policies based on predictions regarding resource utilization for resources of the information handling system and input/output (I/O) demands associated with each client system during specified time windows; and service requests from the client systems in accordance with the adjusted QoS policies.
Type: Application
Filed: December 12, 2023
Publication date: June 12, 2025
Applicant: Dell Products L.P.
Inventors: Ramesh DODDAIAH, Daniel L. HAMLIN, Anup KESHWANI, Malathi RAMAKRISHNAN
-
Publication number: 20250189589
Abstract: An information handling system may include at least one processor and a network interface adapter. The information handling system may be configured to: receive, via the network interface adapter, information relating to a plurality of batteries, wherein the information includes environmental data and performance data; train a machine learning model based on the received information; and based on the machine learning model, perform a predictive analysis on at least one other battery to determine a likelihood of failure.
Type: Application
Filed: December 11, 2023
Publication date: June 12, 2025
Applicant: Dell Products L.P.
Inventors: Ramesh DODDAIAH, Daniel L. HAMLIN, Manikandan R, Malathi RAMAKRISHNAN
-
Publication number: 20250190852
Abstract: An information handling system may include at least one processor and a memory. The information handling system may be configured to: receive data during each of a plurality of time windows, wherein the data is associated with a machine learning model; determine that particular data from a particular time window is associated with a statistical anomaly; and in response to the determining, prevent the particular data from the particular time window from being used to update the machine learning model.
Type: Application
Filed: December 11, 2023
Publication date: June 12, 2025
Applicant: Dell Products L.P.
Inventors: Ramesh DODDAIAH, Daniel L. HAMLIN, Malathi RAMAKRISHNAN, Vivekanandh Narayanasamy RAJAGOPALAN
-
Publication number: 20250190446
Abstract: Storage arrays include inline compression hardware that can simultaneously implement multiple compression levels at line rate. For each compression level, compression efficiency is inversely related to processing efficiency. A compression level is dynamically selected for segments of replication data based on one or more of compression hardware utilization, network utilization, network latency, data compressibility, and IO size. The compression hardware utilization may be maintained at or near full utilization. Extents of data of a replica are analyzed based on compressibility. Replication data that resides in an extent of relatively incompressible data, or that is associated with a relatively small IO, may be compressed using a compression level characterized by greater processing efficiency.
Type: Application
Filed: December 11, 2023
Publication date: June 12, 2025
Applicant: Dell Products L.P.
Inventors: Owen Martin, Ramesh Doddaiah
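A hypothetical decision rule capturing the trade-off the abstract describes: fall back to a processing-efficient ("fast") level when the compression hardware is saturated, the data is nearly incompressible, or the IO is small; otherwise use a compression-efficient ("strong") level. All thresholds are illustrative assumptions.

```python
def pick_level(hw_utilization, compressibility, io_size_kb,
               small_io_kb=8, min_ratio=1.2, util_cap=0.95):
    """Return 'fast' (processing-efficient) or 'strong'
    (compression-efficient) for a segment of replication data.
    compressibility is an estimated compression ratio."""
    if hw_utilization >= util_cap:
        return "fast"          # keep hardware at, not past, saturation
    if compressibility < min_ratio or io_size_kb <= small_io_kb:
        return "fast"          # incompressible extent or small IO
    return "strong"
```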
-
Patent number: 12321782
Abstract: In a storage system in which processor cores are exclusively allocated to run process threads of individual emulations, the allocations of cores to emulations are dynamically reconfigured based on forecasted workload. A workload configuration model is created by testing different core allocation permutations with different workloads. The best performing permutations are stored in the model as workload configurations. The workload configurations are characterized by counts of tasks required to service the workloads. Actual task counts are monitored during normal operation and used to forecast changes in actual task counts. The forecasted task counts are compared with the task counts of the workload configurations of the model to select the best match. Allocation of cores is reconfigured to the best match workload configuration.
Type: Grant
Filed: June 2, 2023
Date of Patent: June 3, 2025
Assignee: Dell Products L.P.
Inventors: Owen Martin, Ramesh Doddaiah, Michael Scharland
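The best-match selection step can be sketched as a nearest-neighbor lookup over task-count vectors. The model contents and configuration names are assumptions made for illustration; Euclidean distance is my choice of matching metric, which the patent does not specify.

```python
def best_configuration(forecast, model):
    """model: {config_name: task-count vector}. Select the stored
    workload configuration whose task counts are nearest (squared
    Euclidean distance) to the forecasted task counts; cores are then
    reallocated to match that configuration."""
    def dist(cfg):
        return sum((a - b) ** 2 for a, b in zip(forecast, model[cfg]))
    return min(model, key=dist)
```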