Patents by Inventor Ravindra R. Sure
Ravindra R. Sure has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11675539
Abstract: A computational device configures a storage system that supports a plurality of submission queues. A file system monitors characteristics of received writes to distribute the writes among the plurality of submission queues. The computational device categorizes the writes into full track writes, medium track writes, and small track writes, measures a frequency of different categories of writes determined based on the categorization of the writes, and generates arbitrations of the writes with varying priorities for distributing the writes for processing in the submission queues. A full track write includes writing incoming data blocks of the writes received to a fresh track, in response to a total size of the incoming data blocks being equal to or more than a size of one full track. A medium track write includes overwriting an existing data track. A small track write includes staging the incoming data blocks to a caching storage.
Type: Grant
Filed: June 3, 2021
Date of Patent: June 13, 2023
Assignee: International Business Machines Corporation
Inventors: Ravindra R. Sure, Samrat P. Kannikar, Sukumar Vankadhara, Sasikanth Eda
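A rough Python sketch of the categorization and frequency-weighted arbitration described in this abstract is shown below. The track size, queue count, and all identifiers (categorize_write, SubmissionQueues) are assumptions made for illustration; they do not come from the patent.

```python
# Minimal sketch of the write-categorization idea described in the abstract.
# Thresholds, queue counts, and all names are illustrative assumptions.
from collections import deque

TRACK_SIZE = 64 * 1024  # assumed size of one full track, in bytes

def categorize_write(total_size, overwrites_existing_track):
    """Classify an incoming write by its size and whether it targets an existing track."""
    if total_size >= TRACK_SIZE and not overwrites_existing_track:
        return "full"    # written to a fresh track
    if overwrites_existing_track:
        return "medium"  # overwrites an existing data track
    return "small"       # staged to caching storage first

class SubmissionQueues:
    """Distribute writes among submission queues, weighting by observed category frequency."""
    def __init__(self, num_queues=4):
        self.queues = [deque() for _ in range(num_queues)]
        self.counts = {"full": 0, "medium": 0, "small": 0}

    def submit(self, write_size, overwrites_existing_track, payload):
        category = categorize_write(write_size, overwrites_existing_track)
        self.counts[category] += 1
        # Simple arbitration: less frequent categories get higher-priority (lower-index) queues.
        rank = sorted(self.counts, key=self.counts.get).index(category)
        self.queues[min(rank, len(self.queues) - 1)].append((category, payload))
        return category
```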
-
Publication number: 20220391135
Abstract: A computational device configures a storage system that supports a plurality of submission queues. A file system of the computational device monitors characteristics of writes received from an application to distribute the writes among the plurality of submission queues of the storage system.
Type: Application
Filed: June 3, 2021
Publication date: December 8, 2022
Inventors: Ravindra R. Sure, Samrat P. Kannikar, Sukumar Vankadhara, Sasikanth Eda
-
Publication number: 20190163370
Abstract: A data recovery (DR) system where local backup (for example, synchronized snapshotting) is performed based on one or more recovery parameters, including at least one of the following: a recovery data objective (RDO) type and/or a recovery data block objective (RDBO) type. A recovery point objective (RPO) type parameter may additionally and concurrently be used as an alternative local backup trigger.
Type: Application
Filed: November 28, 2017
Publication date: May 30, 2019
Inventors: Ravindra R. Sure, Srikanth Srinivasan, Durgesh
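As a loose illustration of a backup trigger driven by RDO, RDBO, and RPO parameters, the sketch below checks all three thresholds. Every value and name in it is assumed; the application itself defines no code-level interface.

```python
# Hypothetical sketch of the multi-parameter local backup trigger described above.
# Threshold values and field names (rdo_bytes, rdbo_blocks, rpo_seconds) are assumptions.
import time

class BackupTrigger:
    def __init__(self, rdo_bytes=256 * 1024 * 1024, rdbo_blocks=50_000, rpo_seconds=900):
        self.rdo_bytes = rdo_bytes        # data volume changed since the last snapshot
        self.rdbo_blocks = rdbo_blocks    # number of data blocks changed
        self.rpo_seconds = rpo_seconds    # elapsed time since the last snapshot
        self.last_snapshot = time.time()

    def should_snapshot(self, changed_bytes, changed_blocks):
        """Trigger a local backup when any of the RDO, RDBO, or RPO limits is reached."""
        return (changed_bytes >= self.rdo_bytes
                or changed_blocks >= self.rdbo_blocks
                or time.time() - self.last_snapshot >= self.rpo_seconds)
```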
-
Patent number: 10210053
Abstract: A moving weighted average of application bandwidth is calculated based on updates to a first data storage by a first data site. A moving weighted average of transmission bandwidth is calculated based on replication of the updates to a second data storage via a second data site. A next coordinated consistency point is identified and the time remaining before the next consistency point is calculated. An amount of the updates that can be replicated before the next consistency point is determined based on the average transmission bandwidth. A prediction of an amount of additional updates that will be generated on the first data site before the next consistency point is made using heuristics based on the average application bandwidth. When update accumulation combined with the prediction exceeds the amount of updates that can be replicated before the next consistency point, pending updates are flushed to the second data storage.
Type: Grant
Filed: April 2, 2018
Date of Patent: February 19, 2019
Assignee: International Business Machines Corporation
Inventors: Manoj P. Naik, Ravindra R. Sure
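The flush decision in this family of filings can be sketched roughly as below; the smoothing factor and function names are assumptions, not details taken from the patent.

```python
# Illustrative sketch of the bandwidth-based flush decision described in the abstract.
def ewma(previous, sample, alpha=0.3):
    """Exponentially weighted moving average used for both bandwidth estimates."""
    return alpha * sample + (1 - alpha) * previous

def should_flush(pending_bytes, avg_app_bw, avg_tx_bw, seconds_to_next_consistency_point):
    """Flush pending updates early if they cannot all be replicated in time."""
    replicable = avg_tx_bw * seconds_to_next_consistency_point       # what can be sent in time
    predicted_new = avg_app_bw * seconds_to_next_consistency_point   # what will still accumulate
    return pending_bytes + predicted_new > replicable
```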
-
Publication number: 20180225180
Abstract: A moving weighted average of application bandwidth is calculated based on updates to a first data storage by a first data site. A moving weighted average of transmission bandwidth is calculated based on replication of the updates to a second data storage via a second data site. A next coordinated consistency point is identified and the time remaining before the next consistency point is calculated. An amount of the updates that can be replicated before the next consistency point is determined based on the average transmission bandwidth. A prediction of an amount of additional updates that will be generated on the first data site before the next consistency point is made using heuristics based on the average application bandwidth. When update accumulation combined with the prediction exceeds the amount of updates that can be replicated before the next consistency point, pending updates are flushed to the second data storage.
Type: Application
Filed: April 2, 2018
Publication date: August 9, 2018
Inventors: Manoj P. Naik, Ravindra R. Sure
-
Patent number: 10031775
Abstract: Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule.
Type: Grant
Filed: October 27, 2016
Date of Patent: July 24, 2018
Assignee: International Business Machines Corporation
Inventors: Manish Modani, Giridhar M. Prabhakar, Ravindra R. Sure
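A toy sketch of the split-and-backfill idea follows; the schedule representation and helper names are illustrative assumptions rather than the patented method itself.

```python
# Toy sketch: split an embarrassingly parallel job into independent sub-tasks and
# move one sub-task into an earlier idle slot on another node (backfilling).
def split_job(job, num_subtasks):
    """Divide a job's work units into independent sub-tasks without dropping any work."""
    units = job["work_units"]
    size = -(-len(units) // num_subtasks)  # ceiling division
    return [{"id": f"{job['id']}.{i}", "work_units": units[i * size:(i + 1) * size]}
            for i in range(num_subtasks) if units[i * size:(i + 1) * size]]

def backfill(schedule, subtask, idle_slots):
    """Place a sub-task into the earliest idle (node, start_time, length) slot that fits."""
    for node, start_time, length in sorted(idle_slots, key=lambda slot: slot[1]):
        if length >= len(subtask["work_units"]):
            schedule.append({"node": node, "start": start_time, "task": subtask["id"]})
            return schedule
    return schedule  # no gap is large enough; the sub-task keeps its original position
```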
-
Patent number: 9983947
Abstract: A moving weighted average of application bandwidth is calculated based on updates to a first data storage by a first data site. A moving weighted average of transmission bandwidth is calculated based on replication of the updates to a second data storage via a second data site. A next coordinated consistency point is identified and the time remaining before the next consistency point is calculated. An amount of the updates that can be replicated before the next consistency point is determined based on the average transmission bandwidth. A prediction of an amount of additional updates that will be generated on the first data site before the next consistency point is made using heuristics based on the average application bandwidth. When update accumulation combined with the prediction exceeds the amount of updates that can be replicated before the next consistency point, pending updates are flushed to the second data storage.
Type: Grant
Filed: July 19, 2016
Date of Patent: May 29, 2018
Assignee: International Business Machines Corporation
Inventors: Manoj P. Naik, Ravindra R. Sure
-
Publication number: 20180024894
Abstract: A moving weighted average of application bandwidth is calculated based on updates to a first data storage by a first data site. A moving weighted average of transmission bandwidth is calculated based on replication of the updates to a second data storage via a second data site. A next coordinated consistency point is identified and the time remaining before the next consistency point is calculated. An amount of the updates that can be replicated before the next consistency point is determined based on the average transmission bandwidth. A prediction of an amount of additional updates that will be generated on the first data site before the next consistency point is made using heuristics based on the average application bandwidth. When update accumulation combined with the prediction exceeds the amount of updates that can be replicated before the next consistency point, pending updates are flushed to the second data storage.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventors: Manoj P. Naik, Ravindra R. Sure
-
Publication number: 20170046201
Abstract: Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule.
Type: Application
Filed: October 27, 2016
Publication date: February 16, 2017
Inventors: Manish Modani, Giridhar M. Prabhakar, Ravindra R. Sure
-
Patent number: 9569262
Abstract: Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule.
Type: Grant
Filed: June 19, 2014
Date of Patent: February 14, 2017
Assignee: International Business Machines Corporation
Inventors: Manish Modani, Giridhar M. Prabhakar, Ravindra R. Sure
-
Patent number: 9563470
Abstract: Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule.
Type: Grant
Filed: December 23, 2013
Date of Patent: February 7, 2017
Assignee: International Business Machines Corporation
Inventors: Manish Modani, Giridhar M. Prabhakar, Ravindra R. Sure
-
Publication number: 20150178126
Abstract: Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule.
Type: Application
Filed: June 19, 2014
Publication date: June 25, 2015
Inventors: Manish Modani, Giridhar M. Prabhakar, Ravindra R. Sure
-
Publication number: 20150178124
Abstract: Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule.
Type: Application
Filed: December 23, 2013
Publication date: June 25, 2015
Applicant: International Business Machines Corporation
Inventors: Manish Modani, Giridhar M. Prabhakar, Ravindra R. Sure
-
Patent number: 8381212
Abstract: The present invention employs a master node for each job to be scheduled; in turn, the master node distributes job start information and executable tasks to a plurality of nodes configured in a hierarchical node tree of a multinode job scheduling system. The various tasks executing at the leaf nodes and other nodes of the tree report status back up the same hierarchical tree structure used to start the job, not to a scheduling agent but to the master node, which has been established by the scheduling agent as the focal point not only for job starting but also for the reporting of status information from the leaf and other nodes in the tree.
Type: Grant
Filed: October 9, 2007
Date of Patent: February 19, 2013
Assignee: International Business Machines Corporation
Inventors: David P. Brelsford, Waiman Chan, Stephen C. Hughes, Kailash N. Marthi, Ravindra R. Sure
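The fan-out/fan-in pattern the abstract describes can be sketched roughly as follows; the class and method names are invented for illustration, since the patent defines no code-level interface.

```python
# Simplified sketch of the hierarchical fan-out/fan-in pattern described above.
class TreeNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def start_job(self, job):
        """Fan the job out down the tree and gather status back up the same path."""
        my_status = {self.name: f"started {job}"}
        for child in self.children:
            my_status.update(child.start_job(job))  # status flows back up to the parent
        return my_status

# The master node (established by the scheduling agent) is the root of the tree:
master = TreeNode("master", [TreeNode("n1", [TreeNode("leaf1"), TreeNode("leaf2")]),
                             TreeNode("n2", [TreeNode("leaf3")])])
status = master.start_job("job-42")  # master receives status for every node in the tree
```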
-
Publication number: 20090094605
Abstract: The present invention employs a master node for each job to be scheduled; in turn, the master node distributes job start information and executable tasks to a plurality of nodes configured in a hierarchical node tree of a multinode job scheduling system. The various tasks executing at the leaf nodes and other nodes of the tree report status back up the same hierarchical tree structure used to start the job, not to a scheduling agent but to the master node, which has been established by the scheduling agent as the focal point not only for job starting but also for the reporting of status information from the leaf and other nodes in the tree.
Type: Application
Filed: October 9, 2007
Publication date: April 9, 2009
Applicant: International Business Machines Corporation
Inventors: David P. Brelsford, Waiman Chan, Stephen C. Hughes, Kailash N. Marthi, Ravindra R. Sure