Patents by Inventor Sai Rama Krishna Susarla

Sai Rama Krishna Susarla has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10798207
    Abstract: A system and method for managing application performance includes a storage controller including a memory containing a machine readable medium comprising machine executable code having stored thereon instructions for performing a method of managing application performance and a processor coupled to the memory. The processor is configured to execute the machine executable code to receive storage requests from a plurality of first applications via a network interface, manage QoS settings for the storage controller and the first applications, and in response to receiving an accelerate command associated with a second application from the first applications, increase a first share of a storage resource allocated to the second application, decrease unlocked second shares of the storage resource of the first applications, and lock the first share. The storage resource is a request queue or a first cache. In some embodiments, the second application is a throughput application or a latency application. (An illustrative sketch of this share-rebalancing scheme appears after this listing.)
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: October 6, 2020
    Assignee: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Scott Hubbard, William Patrick Delaney, Rodney A. Dekoning
  • Patent number: 10320907
    Abstract: A system and method for scheduling the pre-loading of long-term data predicted to be requested in future time epochs into a faster storage tier are disclosed. For each epoch into the future, which may be on the order of minutes or hours, data chunks which may be accessed are predicted. Intersections are taken between predicted data chunks, starting with the furthest predicted epoch in the future, ranging back to the next future epoch. These are then intersected with adjacent results, on up a hierarchy until an intersection is taken of all of the predicted epochs. Commands are generated to preload the data chunks predicted to have the most recurring accesses, and the predicted data chunks are pre-loaded into the cache. This proceeds down the load order until either the last predicted data set is pre-loaded or it is determined that the cache has run out of space. (An illustrative sketch of this preload scheduling appears after this listing.)
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: June 11, 2019
    Assignee: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Pooja Garg
  • Publication number: 20180176323
    Abstract: A system and method for managing application performance includes a storage controller including a memory containing a machine readable medium comprising machine executable code having stored thereon instructions for performing a method of managing application performance and a processor coupled to the memory. The processor is configured to execute the machine executable code to receive storage requests from a plurality of first applications via a network interface, manage QoS settings for the storage controller and the first applications, and in response to receiving an accelerate command associated with a second application from the first applications, increase a first share of a storage resource allocated to the second application, decrease unlocked second shares of the storage resource of the first applications, and lock the first share. The storage resource is a request queue or a first cache. In some embodiments, the second application is a throughput application or a latency application.
    Type: Application
    Filed: February 13, 2018
    Publication date: June 21, 2018
    Inventors: Sai Rama Krishna Susarla, Scott Hubbard, William Patrick Delaney, Rodney A. Dekoning
  • Publication number: 20180091593
    Abstract: A system and method for scheduling the pre-loading of long-term data predicted to be requested in future time epochs into a faster storage tier are disclosed. For each epoch into the future, which may be on the order of minutes or hours, data chunks which may be accessed are predicted. Intersections are taken between predicted data chunks, starting with the furthest predicted epoch in the future, ranging back to the next future epoch. These are then intersected with adjacent results, on up a hierarchy until an intersection is taken of all of the predicted epochs. Commands are generated to preload the data chunks predicted to have the most recurring accesses, and the predicted data chunks are pre-loaded into the cache. This proceeds down the load order until either the last predicted data set is pre-loaded or it is determined that the cache has run out of space.
    Type: Application
    Filed: September 26, 2016
    Publication date: March 29, 2018
    Inventors: Sai Rama Krishna Susarla, Pooja Garg
  • Patent number: 9930133
    Abstract: A system and method for managing application performance includes a storage controller including a memory containing a machine readable medium comprising machine executable code having stored thereon instructions for performing a method of managing application performance and a processor coupled to the memory. The processor is configured to execute the machine executable code to receive storage requests from a plurality of first applications via a network interface, manage QoS settings for the storage controller and the first applications, and in response to receiving an accelerate command associated with a second application from the first applications, increase a first share of a storage resource allocated to the second application, decrease unlocked second shares of the storage resource of the first applications, and lock the first share. The storage resource is a request queue or a first cache. In some embodiments, the second application is a throughput application or a latency application.
    Type: Grant
    Filed: October 23, 2014
    Date of Patent: March 27, 2018
    Assignee: NetApp, Inc.
    Inventors: Sai Rama Krishna Susarla, Scott Hubbard, William Patrick Delaney, Rodney A. Dekoning
  • Patent number: 9832270
    Abstract: A system and method for determining I/O performance headroom that accounts for a real-world workload is provided. In some embodiments, a computing device is provided that is operable to identify a data transaction received by a storage system and directed to a storage device. The computing system identifies an attribute of the data transaction relating to a performance cost of the data transaction and queries a performance profile to determine a benchmark performance level for the storage device. The computing system determines a benchmark performance level for the storage system based on the benchmark performance level for the storage device and compares a metric of the performance of the data transaction with the storage system benchmark performance level to determine remaining headroom of the storage system. (An illustrative sketch of this headroom calculation appears after this listing.)
    Type: Grant
    Filed: November 11, 2014
    Date of Patent: November 28, 2017
    Assignee: NetApp, Inc.
    Inventors: Sai Rama Krishna Susarla, Charles D. Binford, Vishal Kumawat
  • Patent number: 9779004
    Abstract: Systems and methods for efficient input/output (I/O) workload capture are provided. For example, in one aspect, a machine implemented method includes: opening a network socket for listening to a connection request from a computing device; accepting the connection request from the computing device over the network socket; enabling selective data collection based on a network connection with the computing device over the network socket, where the network connection based selective data collection includes obtaining information regarding a plurality of input/output (I/O) requests and responses and performance information of a storage server for processing the I/O requests; sub-sampling the network connection based collected data; and sending at least a portion of the network connection based collected data over the network socket connection to the computing device. (An illustrative sketch of this capture flow appears after this listing.)
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: October 3, 2017
    Assignee: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Joseph G. Moore, Gerald James Fredin
  • Publication number: 20170004093
    Abstract: A system and method of cache monitoring in storage systems includes storing storage blocks in a cache memory. Each of the storage blocks is associated with status indicators. As requests are received at the cache memory, the requests are processed and the status indicators associated with the storage blocks are updated in response to the processing of the requests. One or more storage blocks are selected for eviction when a storage block limit is reached. As ones of the selected one or more storage blocks are evicted from the cache memory, the block counters are updated based on the status indicators associated with the evicted storage blocks. Each of the block counters is associated with a corresponding combination of the status indicators. Caching statistics are periodically updated based on the block counters. (An illustrative sketch of this monitoring scheme appears after this listing.)
    Type: Application
    Filed: September 13, 2016
    Publication date: January 5, 2017
    Inventors: Sai Rama Krishna Susarla, Girish Kumar B K
  • Patent number: 9501420
    Abstract: A system and method for recognizing data access patterns in large data sets and for preloading a cache based on the recognized patterns is provided. In some embodiments, the method includes receiving a data transaction directed to an address space and recording the data transaction in a first set of counters and in a second set of counters. The first set of counters divides the address space into address ranges of a first size, whereas the second set of counters divides the address space into address ranges of a second size that is different from the first size. One of a storage device or a cache thereof is selected to service the data transaction based on the first set of counters, and data is preloaded into the cache based on the second set of counters. (An illustrative sketch of this dual-counter tracking appears after this listing.)
    Type: Grant
    Filed: October 22, 2014
    Date of Patent: November 22, 2016
    Assignee: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Sandeep Kumar Reddy Ummadi, William Patrick Delaney
  • Patent number: 9471510
    Abstract: A system and method of cache monitoring in storage systems includes storing storage blocks in a cache memory. Each of the storage blocks is associated with status indicators. As requests are received at the cache memory, the requests are processed and the status indicators associated with the storage blocks are updated in response to the processing of the requests. One or more storage blocks are selected for eviction when a storage block limit is reached. As ones of the selected one or more storage blocks are evicted from the cache memory, the block counters are updated based on the status indicators associated with the evicted storage blocks. Each of the block counters is associated with a corresponding combination of the status indicators. Caching statistics are periodically updated based on the block counters.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: October 18, 2016
    Assignee: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Girish Kumar B K
  • Publication number: 20160283340
    Abstract: Systems and methods for efficient input/output (I/O) workload capture are provided. For example, in one aspect, a machine implemented method includes: opening a network socket for listening to a connection request from a computing device; accepting the connection request from the computing device over the network socket; enabling selective data collection based on a network connection with the computing device over the network socket, where the network connection based selective data collection includes obtaining information regarding a plurality of input/output (I/O) requests and responses and performance information of a storage server for processing the I/O requests; sub-sampling the network connection based collected data; and sending at least a portion of the network connection based collected data over the network socket connection to the computing device.
    Type: Application
    Filed: March 23, 2015
    Publication date: September 29, 2016
    Applicant: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Joseph G. Moore, Gerald James Fredin
  • Patent number: 9406029
    Abstract: Described herein is a system and method for dynamically managing service-level objectives (SLOs) for workloads of a cluster storage system. Proposed states/solutions of the cluster may be produced and evaluated to select one that achieves the SLOs for each workload. A planner engine may produce a state tree comprising nodes, each node representing a proposed state/solution. New nodes may be added to the state tree based on new solution types that are permitted, or nodes may be removed based on a received time constraint for executing a proposed solution or a client certification of a solution. The planner engine may call an evaluation engine to evaluate proposed states, the evaluation engine using an evaluation function that considers SLO, cost, and optimization goal characteristics to produce a single evaluation value for each proposed state. The planner engine may call a modeler engine that is trained using machine learning techniques. (An illustrative sketch of this planning loop appears after this listing.)
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: August 2, 2016
    Assignee: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Kaladhar Voruganti, Vipul Mathur
  • Publication number: 20160134493
    Abstract: A system and method for determining I/O performance headroom that accounts for a real-world workload is provided. In some embodiments, a computing device is provided that is operable to identify a data transaction received by a storage system and directed to a storage device. The computing system identifies an attribute of the data transaction relating to a performance cost of the data transaction and queries a performance profile to determine a benchmark performance level for the storage device. The computing system determines a benchmark performance level for the storage system based on the benchmark performance level for the storage device and compares a metric of the performance of the data transaction with the storage system benchmark performance level to determine remaining headroom of the storage system.
    Type: Application
    Filed: November 11, 2014
    Publication date: May 12, 2016
    Inventors: Sai Rama Krishna Susarla, Charles D. Binford, Vishal Kumawat
  • Publication number: 20160119443
    Abstract: A system and method for managing application performance includes a storage controller including a memory containing a machine readable medium comprising machine executable code having stored thereon instructions for performing a method of managing application performance and a processor coupled to the memory. The processor is configured to execute the machine executable code to receive storage requests from a plurality of first applications via a network interface, manage QoS settings for the storage controller and the first applications, and in response to receiving an accelerate command associated with a second application from the first applications, increase a first share of a storage resource allocated to the second application, decrease unlocked second shares of the storage resource of the first applications, and lock the first share. The storage resource is a request queue or a first cache. In some embodiments, the second application is a throughput application or a latency application.
    Type: Application
    Filed: October 23, 2014
    Publication date: April 28, 2016
    Inventors: Sai Rama Krishna Susarla, Scott Hubbard, William Patrick Delaney, Rodney A. Dekoning
  • Publication number: 20160117254
    Abstract: A system and method for recognizing data access patterns in large data sets and for preloading a cache based on the recognized patterns is provided. In some embodiments, the method includes receiving a data transaction directed to an address space and recording the data transaction in a first set of counters and in a second set of counters. The first set of counters divides the address space into address ranges of a first size, whereas the second set of counters divides the address space into address ranges of a second size that is different from the first size. One of a storage device or a cache thereof is selected to service the data transaction based on the first set of counters, and data is preloaded into the cache based on the second set of counters.
    Type: Application
    Filed: October 22, 2014
    Publication date: April 28, 2016
    Inventors: Sai Rama Krishna Susarla, Sandeep Kumar Reddy Ummadi, William Patrick Delaney
  • Patent number: 9122739
    Abstract: Described herein is a system and method for dynamically managing service-level objectives (SLOs) for workloads of a cluster storage system. Proposed states/solutions of the cluster may be produced and evaluated to select one that achieves the SLOs for each workload. A planner engine may produce a state tree comprising nodes, each node representing a proposed state/solution. New nodes may be added to the state tree based on new solution types that are permitted, or nodes may be removed based on a received time constraint for executing a proposed solution or a client certification of a solution. The planner engine may call an evaluation engine to evaluate proposed states, the evaluation engine using an evaluation function that considers SLO, cost, and optimization goal characteristics to produce a single evaluation value for each proposed state. The planner engine may call a modeler engine that is trained using machine learning techniques.
    Type: Grant
    Filed: January 28, 2011
    Date of Patent: September 1, 2015
    Assignee: NetApp, Inc.
    Inventors: Neeraja Yadwadkar, Sai Rama Krishna Susarla, Kaladhar Voruganti, Rukma Ameet Talwadker, Vipul Mathur
  • Publication number: 20150178207
    Abstract: A system and method of cache monitoring in storage systems includes storing storage blocks in a cache memory. Each of the storage blocks is associated with status indicators. As requests are received at the cache memory, the requests are processed and the status indicators associated with the storage blocks are updated in response to the processing of the requests. One or more storage blocks are selected for eviction when a storage block limit is reached. As ones of the selected one or more storage blocks are evicted from the cache memory, the block counters are updated based on the status indicators associated with the evicted storage blocks. Each of the block counters is associated with a corresponding combination of the status indicators. Caching statistics are periodically updated based on the block counters.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Applicant: NETAPP, INC.
    Inventors: Sai Rama Krishna Susarla, Girish Kumar B K
  • Publication number: 20150066471
    Abstract: Example embodiments provide various techniques for modeling network storage environments. To model a particular storage environment, component models that are associated with the components of the storage environment are loaded. Each component model is programmed to mathematically simulate one or more components of the storage environment. A system model is then composed from the component models and this system model is configured to simulate the storage environment. (An illustrative sketch of this model composition appears after this listing.)
    Type: Application
    Filed: August 20, 2014
    Publication date: March 5, 2015
    Inventors: Sai Rama Krishna Susarla, Thirumale Niranjan, Siddhartha Nandi, Craig Fulmer Everhart, Kaladhar Voruganti, Jim Voll
  • Patent number: 8868400
    Abstract: Example embodiments provide various techniques for modeling network storage environments. To model a particular storage environment, component models that are associated with the components of the storage environment are loaded. Each component model is programmed to mathematically simulate one or more components of the storage environment. A system model is then composed from the component models and this system model is configured to simulate the storage environment.
    Type: Grant
    Filed: April 30, 2008
    Date of Patent: October 21, 2014
    Assignee: NetApp, Inc.
    Inventors: Sai Rama Krishna Susarla, Thirumale Niranjan, Siddhartha Nandi, Craig Fulmer Everhart, Kaladhar Voruganti, Jim Voll
  • Patent number: 8856335
    Abstract: Described herein is a system and method for dynamically managing service-level objectives (SLOs) for workloads of a cluster storage system. Proposed states/solutions of the cluster may be produced and evaluated to select one that achieves the SLOs for each workload. A planner engine may produce a state tree comprising nodes, each node representing a proposed state/solution. New nodes may be added to the state tree based on new solution types that are permitted, or nodes may be removed based on a received time constraint for executing a proposed solution or a client certification of a solution. The planner engine may call an evaluation engine to evaluate proposed states, the evaluation engine using an evaluation function that considers SLO, cost, and optimization goal characteristics to produce a single evaluation value for each proposed state. The planner engine may call a modeler engine that is trained using machine learning techniques.
    Type: Grant
    Filed: January 28, 2011
    Date of Patent: October 7, 2014
    Assignee: NetApp, Inc.
    Inventors: Neeraja Yadwadkar, Sai Rama Krishna Susarla, Kaladhar Voruganti, Rukma Ameet Talwadker, Vipul Mathur, Lakshmi Narayanan Bairavasundaram
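
Illustrative sketches

The sketches below are editorial illustrations of the inventions listed above. None of the code comes from the patents; every class name, parameter, formula, and threshold is an assumption chosen for readability, and each sketch reduces the claimed mechanism to its core idea in Python.

The first sketch relates to the application-performance entries (patents 10798207 and 9930133, publications 20180176323 and 20160119443). The abstract describes growing one application's share of a storage resource (a request queue or a cache) in response to an accelerate command, funding the increase only from the other applications' unlocked shares, and then locking the grown share. The proportional redistribution used here is an assumed policy, not one stated in the abstract.

```python
# Minimal sketch, assuming equal initial shares and proportional donation
# from unlocked shares; names are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class Share:
    fraction: float          # portion of the resource (all shares sum to 1.0)
    locked: bool = False     # locked shares are never shrunk to fund others

class QosManager:
    def __init__(self, apps):
        # Start every application with an equal, unlocked share.
        self.shares = {app: Share(1.0 / len(apps)) for app in apps}

    def accelerate(self, app, boost):
        """Grow `app`'s share by up to `boost` and lock it, taking the boost
        proportionally from the remaining unlocked shares."""
        donors = [a for a, s in self.shares.items() if a != app and not s.locked]
        available = sum(self.shares[a].fraction for a in donors)
        boost = min(boost, available)       # cannot take more than donors hold
        if donors and boost > 0:
            for a in donors:
                s = self.shares[a]
                s.fraction -= boost * (s.fraction / available)
            self.shares[app].fraction += boost
        self.shares[app].locked = True      # pin the accelerated share

# Example: three applications share a request queue; accelerate "db" by 0.20.
mgr = QosManager(["db", "backup", "analytics"])
mgr.accelerate("db", 0.20)
for name, share in mgr.shares.items():
    print(f"{name}: {share.fraction:.2f} locked={share.locked}")
```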
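
The next sketch relates to patent 10320907 and publication 20180091593. The abstract's hierarchy of intersections over the per-epoch predictions effectively ranks chunks by how many future epochs predict them; this sketch produces that ranking by counting recurrences directly, which is a simplification rather than the patented procedure, and fills the faster tier in rank order until it runs out of space.

```python
# Minimal sketch, assuming per-epoch predictions arrive as sets of chunk ids
# and cache capacity is expressed in chunks; names are illustrative.
from collections import Counter

def plan_preload(predicted_epochs, cache_capacity_chunks):
    """predicted_epochs: one set of predicted chunk ids per future epoch,
    ordered from the next epoch out to the furthest one."""
    # Chunks predicted by every epoch sit at the top of the intersection
    # hierarchy; chunks predicted by fewer epochs rank below them.
    recurrence = Counter()
    for chunks in predicted_epochs:
        recurrence.update(chunks)
    load_order = [chunk for chunk, _count in recurrence.most_common()]

    # Generate preload commands down the load order until either every
    # predicted chunk is scheduled or the cache runs out of space.
    return load_order[:cache_capacity_chunks]

# Example: three future epochs; chunk 7 recurs in all of them, 9 in two.
epochs = [{1, 7, 9}, {2, 7, 9}, {3, 7}]
print(plan_preload(epochs, cache_capacity_chunks=3))
```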
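
The next sketch relates to patent 9832270 and publication 20160134493. It keys a benchmark performance profile on a cost-related attribute of the observed transactions (operation type and I/O size here), derives a system benchmark from the device benchmarks, and reports the unused fraction of that benchmark as headroom. The profile values and the sum-of-devices combining rule are assumptions.

```python
# Minimal sketch, assuming homogeneous devices and an additive system
# benchmark; the profile numbers are made up for illustration.

# Benchmark IOPS per device, keyed by (operation, io_size_kib).
PERFORMANCE_PROFILE = {
    ("read", 4): 20000, ("write", 4): 12000,
    ("read", 64): 6000, ("write", 64): 4000,
}

def system_benchmark_iops(device_count, op, io_size_kib):
    # Assume the system benchmark is the sum of its devices' benchmarks.
    return device_count * PERFORMANCE_PROFILE[(op, io_size_kib)]

def remaining_headroom(observed_iops, device_count, op, io_size_kib):
    # Compare the observed performance metric with the benchmark level.
    benchmark = system_benchmark_iops(device_count, op, io_size_kib)
    return max(0.0, 1.0 - observed_iops / benchmark)

# Example: 4 devices serving 4 KiB reads, with 50,000 IOPS observed.
print(f"remaining headroom: {remaining_headroom(50000, 4, 'read', 4):.0%}")
```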
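
The next sketch relates to patent 9779004 and publication 20160283340. The flow in the abstract is socket-driven: collection is enabled only while a collector is connected, the captured I/O records and server performance information are sub-sampled, and a portion is streamed back over the same connection. The record layout, sampling rate, port, and the simulated I/O stream are all assumptions.

```python
# Minimal sketch, assuming newline-delimited JSON records and 1-in-10
# sub-sampling; the I/O stream itself is simulated.
import json
import socket

SAMPLE_EVERY = 10          # keep one record in ten (sub-sampling)

def simulated_io_records(count):
    # Stand-in for the real request/response stream plus server metrics.
    for i in range(count):
        yield {"op": "read" if i % 3 else "write", "lba": i * 8,
               "latency_us": 120 + (i % 7) * 10, "queue_depth": i % 32}

def serve_once(port=9099):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", port))
        server.listen(1)                      # open a listening socket
        conn, _addr = server.accept()         # accept the collector's request
        with conn:
            # Collection is enabled only while this connection is open.
            for i, record in enumerate(simulated_io_records(1000)):
                if i % SAMPLE_EVERY == 0:     # sub-sample the captured data
                    conn.sendall((json.dumps(record) + "\n").encode())

if __name__ == "__main__":
    serve_once()   # e.g. read the sampled stream with: nc 127.0.0.1 9099
```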
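
The next sketch relates to patent 9471510 and publications 20170004093 and 20150178207. Cached blocks carry status indicators that are updated as requests are processed; when a block is evicted, a counter keyed by that block's exact indicator combination is incremented, and caching statistics are derived from those counters. The two indicators used here (re-referenced and dirty) and the LRU eviction policy are assumptions.

```python
# Minimal sketch, assuming an LRU cache, two status indicators per block, and
# statistics computed on demand rather than on a timer.
from collections import Counter, OrderedDict

class MonitoredCache:
    def __init__(self, block_limit):
        self.block_limit = block_limit
        self.blocks = OrderedDict()            # block id -> status indicators
        self.block_counters = Counter()        # indicator combination -> count

    def access(self, block_id, write=False):
        if block_id in self.blocks:
            status = self.blocks[block_id]     # update indicators on a hit
            status["re_referenced"] = True
            status["dirty"] = status["dirty"] or write
            self.blocks.move_to_end(block_id)
            return
        if len(self.blocks) >= self.block_limit:
            self._evict_one()                  # block limit reached
        self.blocks[block_id] = {"re_referenced": False, "dirty": write}

    def _evict_one(self):
        _victim, status = self.blocks.popitem(last=False)
        # Count the eviction under the victim's exact indicator combination.
        self.block_counters[(status["re_referenced"], status["dirty"])] += 1

    def statistics(self):
        # Caching statistics derived from the block counters.
        total = sum(self.block_counters.values()) or 1
        reused = sum(n for (re_ref, _), n in self.block_counters.items() if re_ref)
        return {"evictions": total, "evicted_after_reuse": reused / total}

cache = MonitoredCache(block_limit=2)
for block in [1, 2, 1, 3, 4]:                  # forces two evictions
    cache.access(block)
print(cache.statistics())
```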
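
The next sketch relates to patent 9501420 and publication 20160117254. Every transaction is recorded in two counter sets that divide the address space into ranges of different sizes: the coarse counters drive the cache-or-device decision for the current transaction, and the fine counters pick nearby ranges to preload. The range sizes, the hotness threshold, and the neighbour-based preload rule are assumptions.

```python
# Minimal sketch, assuming 1 MiB coarse ranges, 64 KiB fine ranges, and a
# fixed hotness threshold; all values are illustrative.
from collections import Counter

COARSE_RANGE = 1 << 20        # 1 MiB regions for the cache-or-device decision
FINE_RANGE = 1 << 16          # 64 KiB regions for preload selection
HOT_THRESHOLD = 4             # coarse hits needed before serving from cache

coarse_counts = Counter()
fine_counts = Counter()

def record_and_route(address):
    # Record the transaction in both counter sets.
    coarse_counts[address // COARSE_RANGE] += 1
    fine_counts[address // FINE_RANGE] += 1

    # First counter set: choose the cache or the backing device.
    use_cache = coarse_counts[address // COARSE_RANGE] >= HOT_THRESHOLD

    # Second counter set: preload the already-touched fine ranges next door.
    region = address // FINE_RANGE
    preload = [r * FINE_RANGE for r in (region - 1, region + 1)
               if fine_counts[r] > 0]
    return use_cache, preload

# Example: a burst of accesses clustered inside one coarse region.
for addr in (0x10000, 0x20000, 0x24000, 0x28000, 0x30000):
    served_from_cache, preload_addrs = record_and_route(addr)
print(served_from_cache, [hex(a) for a in preload_addrs])
```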
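
The next sketch relates to patents 9406029, 9122739, and 8856335. It keeps only the skeleton of the abstract: candidate cluster states are expanded from a current state, each is scored by an evaluation function that folds SLO compliance, cost, and an optimization goal into a single value, and the best-scoring state wins. The best-first expansion, the node-count state, the weights, and the toy latency model stand in for the patents' state tree, permitted solution types, and machine-learning-trained modeler engine.

```python
# Minimal sketch, assuming a one-dimensional state (node count), a toy latency
# model, and fixed weights; lower evaluation values are better.
import heapq
import itertools

def predicted_latency_ms(state):
    # Stand-in for the modeler engine: more nodes -> lower predicted latency.
    return 40.0 / state["nodes"]

def evaluate(state, slo_latency_ms, cost_per_node, w_slo=1.0, w_cost=0.05):
    # Single evaluation value combining SLO violation and cost.
    slo_violation = max(0.0, predicted_latency_ms(state) - slo_latency_ms)
    return w_slo * slo_violation + w_cost * state["nodes"] * cost_per_node

def propose_children(state):
    # Permitted solution types: add a node, or remove one (never below one).
    yield {"nodes": state["nodes"] + 1}
    if state["nodes"] > 1:
        yield {"nodes": state["nodes"] - 1}

def plan(initial_state, slo_latency_ms, cost_per_node, max_expansions=20):
    tie = itertools.count()                 # tie-breaker so dicts never compare
    frontier = [(evaluate(initial_state, slo_latency_ms, cost_per_node),
                 next(tie), initial_state)]
    best_score, best_state = frontier[0][0], initial_state
    for _ in range(max_expansions):
        if not frontier:
            break
        score, _, state = heapq.heappop(frontier)   # expand best node first
        if score < best_score:
            best_score, best_state = score, state
        for child in propose_children(state):
            child_score = evaluate(child, slo_latency_ms, cost_per_node)
            heapq.heappush(frontier, (child_score, next(tie), child))
    return best_state, best_score

# Example: latency SLO of 5 ms, each node costing 10 units per period.
print(plan({"nodes": 2}, slo_latency_ms=5.0, cost_per_node=10.0))
```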
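
The last sketch relates to patent 8868400 and publication 20150066471. Each component of a storage environment gets its own small mathematical model, and a system model composed from those component models simulates the whole environment. The particular components, the M/M/1-style queueing formula, the series composition, and the parameter values are assumptions rather than the patent's models.

```python
# Minimal sketch, assuming components queue independently and are traversed
# in series; parameters are made up for illustration.
class ComponentModel:
    """A component model that predicts its latency at a given request rate."""
    def __init__(self, name, service_time_ms, capacity_iops):
        self.name = name
        self.service_time_ms = service_time_ms
        self.capacity_iops = capacity_iops

    def latency_ms(self, offered_iops):
        # Simple M/M/1-style response time that saturates near capacity.
        utilization = min(offered_iops / self.capacity_iops, 0.99)
        return self.service_time_ms / (1.0 - utilization)

class SystemModel:
    """A system model composed from component models along the I/O path."""
    def __init__(self, components):
        self.components = components

    def latency_ms(self, offered_iops):
        # Assume every request traverses each component in series.
        return sum(c.latency_ms(offered_iops) for c in self.components)

# Compose a toy environment: network, controller, and a disk shelf.
system = SystemModel([
    ComponentModel("network", service_time_ms=0.1, capacity_iops=200000),
    ComponentModel("controller", service_time_ms=0.2, capacity_iops=80000),
    ComponentModel("disk_shelf", service_time_ms=2.0, capacity_iops=20000),
])
for iops in (5000, 15000, 19000):
    print(f"{iops} IOPS -> {system.latency_ms(iops):.2f} ms predicted")
```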