Patents by Inventor Manoj P. Naik

Manoj P. Naik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9992274
    Abstract: In one embodiment, a system is configured to use an owner GW node to write data for a first fileset and determine whether to utilize one or more other GW nodes to handle at least a portion of write traffic for the first fileset, select a set of eligible GW nodes, assign and define a size for one or more write task items for each GW node based on a current dynamic profile of each GW node, provide and/or ensure availability of in-memory and/or I/O resources at each GW node in the set of eligible GW nodes to handle one or more assigned write task items, and distribute workload to the set of eligible GW nodes according to the size for each of the one or more assigned write task items for each individual GW node in the set of eligible GW nodes.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: June 5, 2018
    Assignee: International Business Machines Corporation
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
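
The write-distribution scheme in the entry above can be pictured with a short sketch. The Python below is illustrative only: the profile fields, thresholds, and function names (GatewayProfile, select_eligible_gateways, size_write_tasks) are assumptions, not terms from the patent; it simply shows eligible gateways being selected from a dynamic profile and write task items sized in proportion to each gateway's spare capacity.

```python
# Illustrative sketch only: all class, field, and threshold names below are
# hypothetical placeholders, not taken from the patent.
from dataclasses import dataclass

@dataclass
class GatewayProfile:
    name: str
    cpu_load: float        # 0.0 (idle) .. 1.0 (saturated)
    queue_depth: int       # pending I/O task items
    net_mbps_free: float   # spare network bandwidth toward the home site

def select_eligible_gateways(profiles, max_cpu=0.8, max_queue=64):
    """Keep only gateways with headroom to absorb extra write traffic."""
    return [p for p in profiles if p.cpu_load < max_cpu and p.queue_depth < max_queue]

def size_write_tasks(total_bytes, eligible):
    """Split the pending write workload in proportion to each eligible
    gateway's spare bandwidth, yielding one task-item size per gateway."""
    spare = sum(p.net_mbps_free for p in eligible) or 1.0
    return {p.name: int(total_bytes * p.net_mbps_free / spare) for p in eligible}

profiles = [
    GatewayProfile("gw1", cpu_load=0.30, queue_depth=10, net_mbps_free=400.0),
    GatewayProfile("gw2", cpu_load=0.90, queue_depth=80, net_mbps_free=100.0),  # overloaded, excluded
    GatewayProfile("gw3", cpu_load=0.50, queue_depth=20, net_mbps_free=200.0),
]
eligible = select_eligible_gateways(profiles)
print(size_write_tasks(6_000_000, eligible))   # {'gw1': 4000000, 'gw3': 2000000}
```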
  • Patent number: 9983947
    Abstract: A moving weighted average of application bandwidth is calculated based on updates to a first data storage by a first data site. A moving weighted average of transmission bandwidth is calculated based on replication of the updates to a second data storage via a second data site. A next coordinated consistency point is identified and the time remaining before the next consistency point is calculated. An amount of the updates that can be replicated before the next consistency point is determined based on the average transmission bandwidth. A prediction of an amount of additional updates that will be generated on the first data site before the next consistency point is made using heuristics based on the average application bandwidth. When update accumulation combined with the prediction exceeds the amount of updates that can be replicated before the next consistency point, pending updates are flushed to the second data storage.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: May 29, 2018
    Assignee: International Business Machines Corporation
    Inventors: Manoj P. Naik, Ravindra R. Sure
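
The consistency-point flush heuristic above lends itself to a small worked example. The sketch below assumes an exponentially weighted moving average (the smoothing factor and function names are made up for illustration); it shows the core comparison: flush early when the pending backlog plus the updates the application is predicted to generate exceed what the link can replicate before the next coordinated consistency point.

```python
# Illustrative sketch only: alpha and the function names are assumptions,
# not values or interfaces from the patent.
def ewma(prev_avg, sample, alpha=0.3):
    """Moving weighted average: newer bandwidth samples get weight alpha."""
    return alpha * sample + (1 - alpha) * prev_avg

def should_flush(pending_bytes, app_bw_avg, xmit_bw_avg, seconds_to_next_cp):
    """Flush pending updates if the backlog plus the updates the application
    is expected to generate cannot be replicated before the next consistency point."""
    replicable = xmit_bw_avg * seconds_to_next_cp      # what the link can ship in time
    predicted_new = app_bw_avg * seconds_to_next_cp    # heuristic: recent application rate continues
    return pending_bytes + predicted_new > replicable

# Example: 3.5 GB pending, app writing ~38 MB/s, link sustaining ~84 MB/s, next CP in 60 s
app_bw = ewma(35e6, 45e6)        # ~38 MB/s
xmit_bw = ewma(90e6, 70e6)       # ~84 MB/s
print(should_flush(3.5e9, app_bw, xmit_bw, 60))   # True: the backlog would miss the consistency point
```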
  • Publication number: 20180129680
    Abstract: In one embodiment, a method includes determining a home node that corresponds to gateway (GW) nodes in a clustered file system, each GW node being eligible to process one or more read tasks, determining a peer GW eligibility value for more than one of the GW nodes in the clustered file system eligible to process one or more read tasks, and determining a single GW node from amongst the GW nodes having a highest peer GW eligibility value for each home node. Additionally, the method includes assigning and defining a size for one or more read task items for the GW nodes having the highest peer GW eligibility value for multiple home nodes based on a current dynamic profile of the GW nodes, and distributing workload to the GW nodes according to the size for each of the one or more read task items assigned to the GW nodes.
    Type: Application
    Filed: November 29, 2017
    Publication date: May 10, 2018
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
  • Publication number: 20180063273
    Abstract: A system facilitates access to data in a network and includes a cache that stores instructions. A processor executes the instructions including: caching processing configured to integrate caching into a local cluster file system, and cache local file data in the cache based on fetching file data on demand from a remote cluster file system. The cache is visible to file system clients as a Portable Operating System Interface (POSIX) compliant file system. Applications execute on a multi-node cache cluster using POSIX semantics via a POSIX compliant file system interface. Data cache is locally and remotely consistent for updates.
    Type: Application
    Filed: November 6, 2017
    Publication date: March 1, 2018
    Inventors: Rajagopal Ananthanarayanan, Marc M. Eshel, Roger L. Haskin, Dean Hildebrand, Manoj P. Naik, Frank B. Schmuck, Renu Tewari
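
The on-demand caching behavior described above can be approximated with a toy read-through cache. The sketch below is not the patented caching layer; fetch_from_remote and the directory layout are hypothetical placeholders used only to show fetch-on-miss from the remote cluster followed by local POSIX-style reads.

```python
# Illustrative sketch only: a toy read-through cache demonstrating
# "fetch file data on demand from the remote cluster, then serve it locally".
import os

class ReadThroughCache:
    def __init__(self, cache_dir, fetch_from_remote):
        self.cache_dir = cache_dir
        self.fetch_from_remote = fetch_from_remote   # callable(path) -> bytes (placeholder)
        os.makedirs(cache_dir, exist_ok=True)

    def open_cached(self, rel_path):
        local = os.path.join(self.cache_dir, rel_path)
        if not os.path.exists(local):                # cache miss: pull from the home cluster
            data = self.fetch_from_remote(rel_path)
            os.makedirs(os.path.dirname(local), exist_ok=True)
            with open(local, "wb") as f:
                f.write(data)
        return open(local, "rb")                     # subsequent reads are served locally

cache = ReadThroughCache("/tmp/local_cache",
                         fetch_from_remote=lambda p: b"remote bytes for " + p.encode())
with cache.open_cached("projects/readme.txt") as f:
    print(f.read())
```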
  • Patent number: 9892130
    Abstract: In one embodiment, a method includes determining each gateway (GW) node in a clustered file system eligible to process read tasks and constructing a GW node list of all eligible GW nodes, determining a home node that corresponds to each GW node in the list, creating individual home node GW lists for each home node, with each home node GW list including a set of GW nodes which share a same home node, determining a peer GW eligibility value for each GW node, determining a GW node having a highest eligibility value for each home node, removing all other GW nodes which do not have the highest eligibility value for each home node from the list, assigning and defining a size for read task items for each GW node in the list, and distributing workload to each GW node in the list according to sizes of the read task items.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
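
The per-home-node gateway selection in the entry above reduces to grouping gateways by the home node they front and keeping the peer with the highest eligibility value for each. The Python sketch below assumes hypothetical eligibility scores and an even task split; none of the names come from the patent.

```python
# Illustrative sketch only: eligibility scores and structures are hypothetical.
from collections import defaultdict

def pick_readers(gw_nodes):
    """gw_nodes: list of dicts like {"name": ..., "home": ..., "eligibility": float}.
    Group gateways by home node, then keep only the gateway with the highest
    peer eligibility value for each home node."""
    by_home = defaultdict(list)
    for gw in gw_nodes:
        by_home[gw["home"]].append(gw)
    return {home: max(peers, key=lambda g: g["eligibility"])["name"]
            for home, peers in by_home.items()}

def size_read_tasks(total_bytes, chosen):
    """Split read work evenly across the chosen gateways (a stand-in for the
    patent's sizing by per-gateway dynamic profile)."""
    share = total_bytes // len(chosen)
    return {name: share for name in chosen.values()}

nodes = [
    {"name": "gwA", "home": "home1", "eligibility": 0.7},
    {"name": "gwB", "home": "home1", "eligibility": 0.9},   # highest for home1
    {"name": "gwC", "home": "home2", "eligibility": 0.6},   # only peer for home2
]
chosen = pick_readers(nodes)
print(chosen)                           # {'home1': 'gwB', 'home2': 'gwC'}
print(size_read_tasks(1_000_000, chosen))
```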
  • Publication number: 20180024894
    Abstract: A moving weighted average of application bandwidth is calculated based on updates to a first data storage by a first data site. A moving weighted average of transmission bandwidth is calculated based on replication of the updates to a second data storage via a second data site. A next coordinated consistency point is identified and the time remaining before the next consistency point is calculated. An amount of the updates that can be replicated before the next consistency point is determined based on the average transmission bandwidth. A prediction of an amount of additional updates that will be generated on the first data site before the next consistency point is made using heuristics based on the average application bandwidth. When update accumulation combined with the prediction exceeds the amount of updates that can be replicated before the next consistency point, pending updates are flushed to the second data storage.
    Type: Application
    Filed: July 19, 2016
    Publication date: January 25, 2018
    Inventors: Manoj P. Naik, Ravindra R. Sure
  • Patent number: 9860333
    Abstract: A system facilitates access to data in a network. One implementation includes a caching layer function that is configured to integrate into a local cluster file system, to cache local file data in a cache based on fetching file data on demand from a remote cluster file system, and to operate on a multi-node cache cluster. The cache is visible to file system clients as a Portable Operating System Interface (POSIX) compliant file system, applications execute on the multi-node cache cluster using POSIX semantics via a POSIX compliant file system interface, and data cache is locally consistent for updates made at the multi-node cache cluster using distributed locking for the data cache.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: January 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: Rajagopal Ananthanarayanan, Marc M. Eshel, Roger L. Haskin, Dean Hildebrand, Manoj P. Naik, Frank B. Schmuck, Renu Tewari
  • Publication number: 20170371887
    Abstract: Data is migrated from a source storage device to a destination storage device using tape media. Both the source storage device and the destination storage device utilize disk drives to store data. A portion of data is detected migrating to the tape media. Metadata of the portion of data is changed to identify the portion of data as residing on the tape media. A prefetch command for the portion of data is detected. It is determined that the portion of data is stored on the tape media. In response to determining that the portion of data is stored on the tape media, the prefetch command is executed without recalling the portion of data to the disk drives. Instead, the portion of data is read directly from the tape media.
    Type: Application
    Filed: June 24, 2016
    Publication date: December 28, 2017
    Inventors: Shankar Balasubramanian, Manoj P. Naik, Venkateswara R. Puvvada
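
The prefetch path described above amounts to a metadata check before choosing a reader. The sketch below uses a hypothetical resident_on flag and placeholder reader callables to show a prefetch being satisfied directly from tape, with no recall to the disk tier.

```python
# Illustrative sketch only: the metadata flag and reader functions are
# hypothetical placeholders, not interfaces defined in the application.
def prefetch(path, metadata, read_from_disk, read_from_tape):
    """Serve a prefetch without recalling migrated data back to disk:
    if metadata marks the data as resident on tape, read it from tape directly."""
    if metadata.get(path, {}).get("resident_on", "disk") == "tape":
        return read_from_tape(path)      # no recall to the disk tier
    return read_from_disk(path)

meta = {"/archive/2015/report.dat": {"resident_on": "tape"}}
data = prefetch("/archive/2015/report.dat", meta,
                read_from_disk=lambda p: f"disk:{p}",
                read_from_tape=lambda p: f"tape:{p}")
print(data)   # tape:/archive/2015/report.dat
```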
  • Publication number: 20170193007
    Abstract: In one embodiment, a method includes determining each gateway (GW) node in a clustered file system eligible to process read tasks and constructing a GW node list of all eligible GW nodes, determining a home node that corresponds to each GW node in the list, creating individual home node GW lists for each home node, with each home node GW list including a set of GW nodes which share a same home node, determining a peer GW eligibility value for each GW node, determining a GW node having a highest eligibility value for each home node, removing all other GW nodes which do not have the highest eligibility value for each home node from the list, assigning and defining a size for read task items for each GW node in the list, and distributing workload to each GW node in the list according to sizes of the read task items.
    Type: Application
    Filed: March 17, 2017
    Publication date: July 6, 2017
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
  • Patent number: 9667736
    Abstract: In one embodiment, a method for providing parallel reading in a clustered file system having cache storage includes using an owner gateway node to read data for a fileset, determining whether to utilize other gateway nodes to handle a portion of read traffic for the fileset, selecting a set of eligible gateway nodes based on: a current internal workload, a network workload, and recent performance history data regarding workload distribution across the other gateway nodes, assigning and defining a size for read task items for each gateway node in the set based on a current dynamic profile of each gateway node in the set, providing in-memory and/or input/output resources at each gateway node in the set to handle assigned read task items, and distributing workload to the set of eligible gateway nodes according to the size for the assigned read task items for each gateway node in the set.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: May 30, 2017
    Assignee: International Business Machines Corporation
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
  • Publication number: 20170134482
    Abstract: In one embodiment, a system is configured to use an owner GW node to write data for a first fileset and determine whether to utilize one or more other GW nodes to handle at least a portion of write traffic for the first fileset, select a set of eligible GW nodes, assign and define a size for one or more write task items for each GW node based on a current dynamic profile of each GW node, provide and/or ensure availability of in-memory and/or I/O resources at each GW node in the set of eligible GW nodes to handle one or more assigned write task items, and distribute workload to the set of eligible GW nodes according to the size for each of the one or more assigned write task items for each individual GW node in the set of eligible GW nodes.
    Type: Application
    Filed: January 20, 2017
    Publication date: May 11, 2017
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
  • Patent number: 9614926
    Abstract: In one embodiment, a method includes using an owner gateway node to write data for a fileset, determining whether to utilize other gateway nodes to handle a portion of write traffic for the fileset, selecting a set of eligible gateway nodes based on: a current internal workload, a network workload, and recent performance history data in regard to workload distribution across the other gateway nodes, assigning and defining a size for one or more write task items for each gateway node in the set based on a current dynamic profile of each gateway node, providing availability of in-memory and/or I/O resources at each gateway node in the set to handle assigned write task items, and distributing workload to the set of eligible gateway nodes according to the size for each of the assigned write task items for each individual gateway node in the set of eligible gateway nodes.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: April 4, 2017
    Assignee: International Business Machines Corporation
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
  • Publication number: 20160103850
    Abstract: The embodiments described herein relate to synchronization of data in a shared pool of configurable computer resources. One or more consistency points are created in a source filesystem. A first consistency point is compared with a second consistency point to detect a directory change at the source filesystem, which includes identifying at least one difference between the first and second consistency points. A file level change associated with an established directory at a target filesystem is identified responsive to the detection of the directory change. A link is established between the source filesystem and the target filesystem, and the established directory is updated based on the file level change.
    Type: Application
    Filed: December 15, 2015
    Publication date: April 14, 2016
    Applicant: International Business Machines Corporation
    Inventors: Karan Gupta, Manoj P. Naik, Frank B. Schmuck, Mansi A. Shah, Renu Tewari
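
The consistency-point comparison above can be illustrated by diffing two snapshots of directory metadata. The sketch below models consistency points as plain path-to-metadata dictionaries (an assumption for illustration only) and emits the create/update/delete operations that would be replayed against the established target directory.

```python
# Illustrative sketch only: consistency points are modeled as plain
# {path: metadata} snapshots; a real filesystem would use snapshot structures.
def diff_consistency_points(cp_old, cp_new):
    """Compare two consistency points of the source filesystem and return the
    file-level changes to replay against the target filesystem."""
    changes = []
    for path, meta in cp_new.items():
        if path not in cp_old:
            changes.append(("create", path, meta))
        elif cp_old[path] != meta:
            changes.append(("update", path, meta))
    for path in cp_old:
        if path not in cp_new:
            changes.append(("delete", path, None))
    return changes

cp1 = {"/dir/a.txt": {"size": 10}, "/dir/b.txt": {"size": 20}}
cp2 = {"/dir/a.txt": {"size": 15}, "/dir/c.txt": {"size": 5}}
for op in diff_consistency_points(cp1, cp2):
    print(op)   # ('update', '/dir/a.txt', ...), ('create', '/dir/c.txt', ...), ('delete', '/dir/b.txt', None)
```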
  • Patent number: 9298625
    Abstract: Aspects of the invention are provided to support partial file caching on a file system block boundary. All read requests are converted so that offset and count are aligned on a block boundary. Data associated with read requests is first satisfied from the local cache, with cache misses supported by a call to the persistent or remote system. Similarly, for a write request, any partial blocks are aligned to the block boundary. Data associated with the write request is written to the local cache and placed in a queue for replay to the persistent or remote system.
    Type: Grant
    Filed: July 10, 2015
    Date of Patent: March 29, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Manoj P. Naik, Frank B. Schmuck, Renu Tewari
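
The block-boundary conversion described in the entry above is a simple rounding rule: round the request offset down and the request end up to the file system block size. The helper below assumes a 4 KiB block size purely for illustration; the name and default are not from the patent.

```python
# Illustrative sketch only: block size and function name are assumptions.
def align_to_block(offset, count, block_size=4096):
    """Expand a byte-range request so both ends fall on file system block
    boundaries, since partial-file caching operates on whole blocks."""
    start = (offset // block_size) * block_size                            # round start down
    end = ((offset + count + block_size - 1) // block_size) * block_size   # round end up
    return start, end - start

print(align_to_block(5000, 100))   # (4096, 4096)  -> one whole block
print(align_to_block(4000, 200))   # (0, 8192)     -> spans two blocks
```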
  • Patent number: 9235594
    Abstract: Embodiments of the invention relate to synchronization of data in a shared pool of configurable computer resources. An image of the filesystem changes, including data and metadata, is captured in the form of a consistency point. Sequential consistency points are created, with changes to data and metadata in the filesystem between sequential consistency points captured and placed in a queue for communication to a target filesystem at a target site. The changes are communicated as a filesystem operation, with the communication limited to the changes captured and reflected in the consistency point.
    Type: Grant
    Filed: August 20, 2012
    Date of Patent: January 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Karan Gupta, Manoj P. Naik, Frank B. Schmuck, Mansi A. Shah, Renu Tewari
  • Publication number: 20150350366
    Abstract: A system facilitates access to data in a network. One implementation includes a caching layer function that is configured to integrate into a local cluster file system, to cache local file data in a cache based on fetching file data on demand from a remote cluster file system, and to operate on a multi-node cache cluster.
    Type: Application
    Filed: August 10, 2015
    Publication date: December 3, 2015
    Inventors: Rajagopal Ananthanarayanan, Marc M. Eshel, Roger L. Haskin, Dean Hildebrand, Manoj P. Naik, Frank B. Schmuck, Renu Tewari
  • Publication number: 20150317250
    Abstract: Aspects of the invention are provided to support partial file caching on a file system block boundary. All read requests are converted so that offset and count are aligned on a block boundary. Data associated with read requests is first satisfied from the local cache, with cache misses supported by a call to the persistent or remote system. Similarly, for a write request, any partial blocks are aligned to the block boundary. Data associated with the write request is written to the local cache and placed in a queue for replay to the persistent or remote system.
    Type: Application
    Filed: July 10, 2015
    Publication date: November 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Manoj P. Naik, Frank B. Schmuck, Renu Tewari
  • Patent number: 9176980
    Abstract: Scalable caching of remote file data in cluster file systems is provided. One implementation involves maintaining a cache in a local cluster file system and caching local file data in the cache by fetching file data on demand from the remote cluster file system into the local cached file system over the network. The local file data and metadata corresponds to the remote file data and metadata in the remote cluster file system. Updates made to the local file data and metadata are pushed back to the remote cluster file system asynchronously.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: November 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Rajagopal Ananthanarayanan, Marc M. Eshel, Roger L. Haskin, Dean Hildebrand, Manoj P. Naik, Frank B. Schmuck, Renu Tewari
  • Publication number: 20150312343
    Abstract: In one embodiment, a method for providing parallel reading in a clustered file system having cache storage includes using an owner gateway node to read data for a fileset, determining whether to utilize other gateway nodes to handle a portion of read traffic for the fileset, selecting a set of eligible gateway nodes based on: a current internal workload, a network workload, and recent performance history data regarding workload distribution across the other gateway nodes, assigning and defining a size for read task items for each gateway node in the set based on a current dynamic profile of each gateway node in the set, providing in-memory and/or input/output resources at each gateway node in the set to handle assigned read task items, and distributing workload to the set of eligible gateway nodes according to the size for the assigned read task items for each gateway node in the set.
    Type: Application
    Filed: April 29, 2014
    Publication date: October 29, 2015
    Applicant: International Business Machines Corporation
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi
  • Publication number: 20150312342
    Abstract: In one embodiment, a method includes using an owner gateway node to write data for a fileset, determining whether to utilize other gateway nodes to handle a portion of write traffic for the fileset, selecting a set of eligible gateway nodes based on: a current internal workload, a network workload, and recent performance history data in regard to workload distribution across the other gateway nodes, assigning and defining a size for one or more write task items for each gateway node in the set based on a current dynamic profile of each gateway node, providing availability of in-memory and/or I/O resources at each gateway node in the set to handle assigned write task items, and distributing workload to the set of eligible gateway nodes according to the size for each of the assigned write task items for each individual gateway node in the set of eligible gateway nodes.
    Type: Application
    Filed: April 29, 2014
    Publication date: October 29, 2015
    Applicant: International Business Machines Corporation
    Inventors: Kalyan C. Gunda, Dean Hildebrand, Manoj P. Naik, Riyazahamad M. Shiraguppi