Patents by Inventor Somasundaram Krishnasamy

Somasundaram Krishnasamy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10831369
    Abstract: A method and system for synchronizing caches after reboot are described. In a cached environment, a host server stores a cache counter associated with the cache, which can be stored in the cache itself or in another permanent storage device. When data blocks are written to the cache, metadata for each data block is also written to the cache. This metadata includes a block counter based on a value of the cache counter. After a number of data operations are performed in the cache, the value of the cache counter is updated. Then, each data block is selectively updated based on a comparison of the value of the cache counter with a value of the block counter in the metadata for the corresponding data block.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: November 10, 2020
    Assignee: NetApp, Inc.
    Inventors: Somasundaram Krishnasamy, Brian McKean, Yanling Qi
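    The counter mechanism described in this abstract can be illustrated with a minimal Python sketch. All names, the update interval, and the invalidation policy below are assumptions made for illustration, not NetApp's implementation or the patent's claims.

      # Counter-based cache synchronization after reboot (illustrative only).
      class CountedCache:
          def __init__(self, update_interval=1000):
              self.cache_counter = 0          # persisted with the cache or in other permanent storage
              self.ops_since_update = 0
              self.update_interval = update_interval
              self.blocks = {}                # block_id -> (data, block_counter)

          def write_block(self, block_id, data):
              # Metadata written with each block records the current cache counter value.
              self.blocks[block_id] = (data, self.cache_counter)
              self.ops_since_update += 1
              if self.ops_since_update >= self.update_interval:
                  self.cache_counter += 1     # advance the counter after a number of operations
                  self.ops_since_update = 0

          def synchronize_after_reboot(self):
              # Selectively drop blocks whose recorded counter lags the cache counter;
              # the exact comparison rule here is a simplifying assumption.
              stale = [bid for bid, (_, ctr) in self.blocks.items()
                       if ctr < self.cache_counter]
              for bid in stale:
                  del self.blocks[bid]
              return stale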
  • Publication number: 20180129421
    Abstract: A method and system for synchronizing caches after reboot are described. In a cached environment, a host server stores a cache counter associated with the cache, which can be stored in the cache itself or in another permanent storage device. When data blocks are written to the cache, metadata for each data block is also written to the cache. This metadata includes a block counter based on a value of the cache counter. After a number of data operations are performed in the cache, the value of the cache counter is updated. Then, each data block is selectively updated based on a comparison of the value of the cache counter with a value of the block counter in the metadata for the corresponding data block.
    Type: Application
    Filed: November 22, 2017
    Publication date: May 10, 2018
    Inventors: Somasundaram Krishnasamy, Brian McKean, Yanling Qi
  • Patent number: 9830081
    Abstract: A method and system for synchronizing caches after reboot are described. In a cached environment, a host server stores a cache counter associated with the cache, which can be stored in the cache itself or in another permanent storage device. When data blocks are written to the cache, metadata for each data block is also written to the cache. This metadata includes a block counter based on a value of the cache counter. After a number of data operations are performed in the cache, the value of the cache counter is updated. Then, each data block is selectively updated based on a comparison of the value of the cache counter with a value of the block counter in the metadata for the corresponding data block.
    Type: Grant
    Filed: January 16, 2015
    Date of Patent: November 28, 2017
    Assignee: NetApp, Inc.
    Inventors: Somasundaram Krishnasamy, Brian McKean, Yanling Qi
  • Publication number: 20170220476
    Abstract: A method includes: communicating read requests from a host device to either a storage array controller or a data cache associated with the host device; classifying portions of data, in response to the read requests, according to frequency of access of the respective portions of data; and causing the storage array controller to either promote a first portion of data to a data cache associated with the storage array controller or demote the first portion of data from the data cache associated with the storage array controller in response to a change in cache status of the first portion of data at the data cache associated with the host device and in response to frequency of access of the first portion of data.
    Type: Application
    Filed: January 29, 2016
    Publication date: August 3, 2017
    Inventors: Yanling Qi, Junjie Qian, Somasundaram Krishnasamy
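    The classification and promote/demote decision in this abstract can be sketched as follows; the threshold, names, and return values are illustrative assumptions rather than the claimed implementation.

      # Host-cache status drives promotion/demotion at the array controller (sketch).
      HOT_THRESHOLD = 10   # accesses before a data portion is treated as frequently accessed

      class ArrayCacheCoordinator:
          def __init__(self):
              self.access_counts = {}    # portion_id -> read frequency
              self.host_cached = set()   # portions currently held in the host-side cache

          def record_read(self, portion_id):
              self.access_counts[portion_id] = self.access_counts.get(portion_id, 0) + 1

          def host_cache_changed(self, portion_id, now_cached):
              # Called when a portion enters or leaves the data cache on the host device.
              freq = self.access_counts.get(portion_id, 0)
              if now_cached:
                  self.host_cached.add(portion_id)
                  # Data now served from the host cache can be demoted from the
                  # array controller's cache to avoid holding it twice.
                  return ("demote", portion_id) if freq >= HOT_THRESHOLD else ("no-op", portion_id)
              self.host_cached.discard(portion_id)
              # Frequently read data evicted from the host cache is a candidate
              # for promotion into the array controller's cache.
              return ("promote", portion_id) if freq >= HOT_THRESHOLD else ("no-op", portion_id)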
  • Publication number: 20160210055
    Abstract: A method and system for synchronizing caches after reboot are described. In a cached environment, a host server stores a cache counter associated with the cache, which can be stored in the cache itself or in another permanent storage device. When data blocks are written to the cache, metadata for each data block is also written to the cache. This metadata includes a block counter based on a value of the cache counter. After a number of data operations are performed in the cache, the value of the cache counter is updated. Then, each data block is selectively updated based on a comparison of the value of the cache counter with a value of the block counter in the metadata for the corresponding data block.
    Type: Application
    Filed: January 16, 2015
    Publication date: July 21, 2016
    Inventors: Somasundaram Krishnasamy, Brian McKean, Yanling Qi
  • Publication number: 20160212198
    Abstract: A method and system for host caches managed in a unified manner are described. In an example, a server in a clustered environment designates cache ownership for a cluster application to the cache on one of the hosts. While the application is running on this host, the server monitors data writes made by the application. Upon detecting that the application is running on a different host in the clustered environment, the server can transfer cache ownership to the new host and selectively invalidate cache blocks in the cache of the new host based on the data writes that were previously monitored.
    Type: Application
    Filed: January 16, 2015
    Publication date: July 21, 2016
    Inventors: Somasundaram Krishnasamy, Brian McKean, Yanling Qi
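    A simplified sketch of the cache-ownership transfer described above follows; the Host class and the write-monitoring structure are hypothetical stand-ins, not the patented design.

      # Cache ownership transfer with selective invalidation (illustrative only).
      class Host:
          def __init__(self, name):
              self.name = name
              self.cache = {}            # block_id -> cached data

          def invalidate(self, block_id):
              self.cache.pop(block_id, None)

      class ClusterCacheManager:
          def __init__(self, owner):
              self.owner = owner         # host currently owning the application's cache
              self.written_blocks = set()

          def record_write(self, block_id):
              # The server monitors data writes made while the application runs.
              self.written_blocks.add(block_id)

          def application_moved_to(self, new_host):
              # On detecting the application on a different host, transfer ownership and
              # selectively invalidate only the blocks known to have been written.
              if new_host is not self.owner:
                  for block_id in self.written_blocks:
                      new_host.invalidate(block_id)
                  self.owner = new_host
                  self.written_blocks.clear()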
  • Publication number: 20150363319
    Abstract: Examples described herein include a system for storing data. The data storage system retrieves a first set of metadata associated with data stored on a first cache memory, and stores the first set of metadata on a primary storage device. The primary storage device is a backing store for the data stored on the first cache memory. The storage system selectively copies data from the primary storage device to a second cache memory based, at least in part, on the first set of metadata stored on the primary storage device. For some aspects, the storage system may copy the data from the primary storage device to the second cache memory upon determining that the first cache memory is in a failover state.
    Type: Application
    Filed: June 12, 2014
    Publication date: December 17, 2015
    Inventors: Yanling Qi, Brian McKean, Somasundaram Krishnasamy, Dennis Hahn
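    The metadata-driven warm-up described above can be outlined roughly as below; the dict-backed "primary storage" and all names are assumptions chosen for clarity.

      # Warm a second cache from primary storage after the first cache fails (sketch).
      class FailoverCacheWarmer:
          def __init__(self, primary_storage):
              self.primary = primary_storage   # backing store: block_id -> data
              self.saved_metadata = set()      # ids of blocks cached on the first cache memory

          def persist_metadata(self, first_cache_block_ids):
              # Metadata describing the first cache's contents is stored on the
              # primary storage device (modeled here as an in-memory set).
              self.saved_metadata = set(first_cache_block_ids)

          def warm_second_cache(self, second_cache, first_cache_failed):
              # Copy previously cached blocks into the second cache only when the
              # first cache memory is in a failover state.
              if not first_cache_failed:
                  return
              for block_id in self.saved_metadata:
                  if block_id in self.primary:
                      second_cache[block_id] = self.primary[block_id]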
  • Patent number: 8713362
    Abstract: Embodiments comprise a plurality of computing devices that dynamically intercept process application I/O errors. Various embodiments comprise two or more computing devices, such as two or more servers, each having access to a shared data storage system. An application may be executing on the first computing device and performing an I/O operation when an I/O error occurs. The first computing device may intercept the I/O error, rather than passing it back to the application, and prevent the error from affecting the application. The first computing device may complete the I/O operation, and any other pending I/O operations not written to disk, via an alternate path, perform a checkpoint operation to capture the state of the set of processes associated with the application, and transfer the checkpoint image to the second computing device. The second computing device may resume operation of the application from the checkpoint image.
    Type: Grant
    Filed: December 1, 2010
    Date of Patent: April 29, 2014
    Assignee: International Business Machines Corporation
    Inventors: Douglas J. Griffith, Angela A. Jaehde, Somasundaram Krishnasamy, Stephen A. Schlachter
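    The intercept, checkpoint, and resume flow in this abstract can be sketched as follows; the StoragePath class, the dictionary checkpoint image, and the function names are illustrative assumptions, not IBM's implementation.

      # Intercept an I/O error, retry on an alternate path, then checkpoint (sketch).
      class StoragePath:
          def __init__(self, healthy=True):
              self.healthy = healthy
              self.blocks = []

          def write(self, block):
              if not self.healthy:
                  raise IOError("path failure")
              self.blocks.append(block)

      def write_with_failover(primary_path, alternate_path, app_state, block):
          try:
              primary_path.write(block)
              return None                      # no error, no checkpoint needed
          except IOError:
              # Intercept the error instead of returning it to the application,
              # complete the pending I/O via an alternate path, and capture state.
              alternate_path.write(block)
              return dict(app_state)           # checkpoint image to transfer

      def resume_on_second_device(checkpoint_image):
          # The second computing device resumes the application from the image.
          return dict(checkpoint_image)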
  • Patent number: 8356099
    Abstract: A method, programmed medium and system are disclosed which provide for end-to-end QoS for a set of processes that comprise a workload over NFS. A set of processes that comprise a workload, such as the processes of a WPAR or an entire LPAR, are given a class designation and assigned priority/limits. The data are then passed to the server, which allocates resources based on the sum total of all the current classes and their priorities and/or limits. This requires re-engineering the NFS client code to be workload-aware and the NFS server code to accommodate the resource allocation and prioritization needs of the NFS clients.
    Type: Grant
    Filed: April 29, 2012
    Date of Patent: January 15, 2013
    Assignee: International Business Machines Corporation
    Inventors: Adekunle Bello, Douglas Griffith, Somasundaram Krishnasamy, Aruna Yedavilli
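    The class-based allocation described above might look like the following on the server side; the proportional-share policy, field names, and numbers are assumptions made for illustration.

      # Allocate server resources across workload classes by priority, with limits (sketch).
      def allocate_bandwidth(total_bandwidth, workloads):
          # workloads: list of dicts with "name", "priority", and an optional "limit".
          total_priority = sum(w["priority"] for w in workloads) or 1
          shares = {}
          for w in workloads:
              share = total_bandwidth * w["priority"] / total_priority
              if "limit" in w:
                  share = min(share, w["limit"])   # per-class cap, if configured
              shares[w["name"]] = share
          return shares

      # Example: two workload classes reported by NFS clients to the server.
      print(allocate_bandwidth(1000, [
          {"name": "wpar-db", "priority": 3, "limit": 700},
          {"name": "lpar-batch", "priority": 1},
      ]))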
  • Patent number: 8321569
    Abstract: A method, programmed medium and system are disclosed which provide for end-to-end QoS for a set of processes that comprise a workload over NFS. A set of processes that comprise a workload, such as the processes of a WPAR or an entire LPAR, are given a class designation and assigned priority/limits. The data are then passed to the server, which allocates resources based on the sum total of all the current classes and their priorities and/or limits. This requires re-engineering the NFS client code to be workload-aware and the NFS server code to accommodate the resource allocation and prioritization needs of the NFS clients.
    Type: Grant
    Filed: December 17, 2009
    Date of Patent: November 27, 2012
    Assignee: International Business Machines Corporation
    Inventors: Adekunle Bello, Douglas Griffith, Somasundaram Krishnasamy, Aruna Yedavilli
  • Publication number: 20120215922
    Abstract: A method, programmed medium and system are disclosed which provide for end-to-end QoS for a set of processes that comprise a workload over NFS. A set of processes that comprise a workload, such as the processes of a WPAR or an entire LPAR, are given a class designation and assigned priority/limits. The data are then passed to the server, which allocates resources based on the sum total of all the current classes and their priorities and/or limits. This requires re-engineering the NFS client code to be workload-aware and the NFS server code to accommodate the resource allocation and prioritization needs of the NFS clients.
    Type: Application
    Filed: April 29, 2012
    Publication date: August 23, 2012
    Applicant: International Business Machines Corporation
    Inventors: Adekunle Bello, Douglas Griffith, Somasundaram Krishnasamy, Aruna Yedavilli
  • Publication number: 20120144233
    Abstract: Embodiments comprise a plurality of computing devices that dynamically intercept process application I/O errors. Various embodiments comprise two or more computing devices, such as two or more servers, each having access to a shared data storage system. An application may be executing on the first computing device and performing an I/O operation when an I/O error occurs. The first computing device may intercept the I/O error, rather than passing it back to the application, and prevent the error from affecting the application. The first computing device may complete the I/O operation, and any other pending I/O operations not written to disk, via an alternate path, perform a checkpoint operation to capture the state of the set of processes associated with the application, and transfer the checkpoint image to the second computing device. The second computing device may resume operation of the application from the checkpoint image.
    Type: Application
    Filed: December 1, 2010
    Publication date: June 7, 2012
    Applicant: International Business Machines Corporation
    Inventors: Douglas J. Griffith, Angela A. Jaehde, Somasundaram Krishnasamy, Stephen A. Schlachter
  • Publication number: 20110153825
    Abstract: A method, programmed medium and system are disclosed which provide for end-to-end QoS for a set of processes that comprise a workload over NFS. A set of processes that comprise a workload, such as the processes of a WPAR or an entire LPAR, are given a class designation and assigned priority/limits. The data are then passed to the server, which allocates resources based on the sum total of all the current classes and their priorities and/or limits. This requires re-engineering the NFS client code to be workload-aware and the NFS server code to accommodate the resource allocation and prioritization needs of the NFS clients.
    Type: Application
    Filed: December 17, 2009
    Publication date: June 23, 2011
    Applicant: International Business Machines Corporation
    Inventors: Adekunle Bello, Douglas Griffith, Somasundaram Krishnasamy, Aruna Yedavilli