Patents by Inventor Srikumar Venugopal

Srikumar Venugopal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230306481
    Abstract: In an approach for storage, search, acquisition, and composition of a digital artifact, a processor obtains the digital artifact in a digital marketplace platform. The digital artifact is a collection of digital data with automatically generated and verifiable provenance and usage data. A processor transforms the digital artifact to define an access privilege. A processor shares the digital artifact in the digital marketplace platform by providing a view of a catalogue including the digital artifact. A processor authorizes a usage request based on the access privilege. A processor rewards a source of the digital artifact based on the usage of the digital artifact.
    Type: Application
    Filed: March 24, 2022
    Publication date: September 28, 2023
    Inventors: Vasileios Vasileiadis, Srikumar Venugopal, Stefano Braghin, Christian Pinto, Michael Johnston, Yiannis Gkoufas
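    A minimal sketch of the flow in the abstract above (publication 20230306481), assuming a simple in-memory catalogue: the artifact is "transformed" by defining an access privilege, shared in a catalogue, usage requests are authorized against that privilege, and the source is credited on each authorized use. All class, field, and method names here are illustrative assumptions, not taken from the patent application.

        from dataclasses import dataclass, field

        @dataclass
        class DigitalArtifact:
            artifact_id: str
            source: str                                      # originator rewarded on usage
            provenance: list = field(default_factory=list)   # auto-generated provenance records
            access_privilege: str = "private"

        class Marketplace:
            def __init__(self):
                self.catalogue = {}   # artifact_id -> DigitalArtifact (the shared view)
                self.rewards = {}     # source -> accumulated reward

            def share(self, artifact: DigitalArtifact, privilege: str):
                # Transform the artifact by defining its access privilege, then list it.
                artifact.access_privilege = privilege
                self.catalogue[artifact.artifact_id] = artifact

            def authorize(self, artifact_id: str, requester_privilege: str) -> bool:
                # Authorize a usage request based on the access privilege and reward the source.
                artifact = self.catalogue[artifact_id]
                allowed = artifact.access_privilege in ("public", requester_privilege)
                if allowed:
                    self.rewards[artifact.source] = self.rewards.get(artifact.source, 0) + 1
                return allowed

        market = Marketplace()
        market.share(DigitalArtifact("a1", source="team-x"), privilege="research")
        print(market.authorize("a1", requester_privilege="research"))   # True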
  • Patent number: 11755543
    Abstract: A computer implemented method for optimizing performance of a workflow includes associating each of a plurality of workflow nodes in a workflow with a data cache and managing the data cache on a local storage device on one of one or more compute nodes. A scheduler can request execution of the tasks of a given one of the plurality of workflow nodes on one of the one or more compute nodes that hosts the data cache associated with the given one of the plurality of workflow nodes. Each of the plurality of workflow nodes is permitted to access a distributed filesystem that is visible to each of the plurality of compute nodes. The data cache stores data produced by the tasks of the given one of the plurality of workflow nodes.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: September 12, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vasileios Vasileiadis, Christian Pinto, Michael Johnston, Ioannis Gkoufas, Srikumar Venugopal
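    A minimal sketch of the cache-affinity scheduling described in the abstract for patent 11755543 above, assuming a simple in-memory map from workflow nodes to the compute nodes hosting their data caches; class and method names are illustrative, not taken from the patent.

        class CacheAwareScheduler:
            def __init__(self, compute_nodes):
                self.compute_nodes = list(compute_nodes)
                self.cache_host = {}   # workflow node -> compute node hosting its data cache

            def register_cache(self, workflow_node, compute_node):
                # Associate the workflow node's data cache with local storage on a compute node.
                self.cache_host[workflow_node] = compute_node

            def place_task(self, workflow_node):
                # Prefer the compute node that already hosts the cache; otherwise pick any
                # node (all nodes can still see the shared distributed filesystem) and
                # create the cache there.
                node = self.cache_host.get(workflow_node)
                if node is None:
                    node = self.compute_nodes[0]
                    self.register_cache(workflow_node, node)
                return node

        scheduler = CacheAwareScheduler(["node-a", "node-b"])
        scheduler.register_cache("preprocess", "node-b")
        print(scheduler.place_task("preprocess"))   # node-b: cache affinity respected
        print(scheduler.place_task("train"))        # node-a: new cache created there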
  • Publication number: 20230096276
    Abstract: A method, system, and computer program product for garbage collection of redundant partitions in distributed data management systems are provided. The method stores data across a set of nodes with the data being stored using one or more partitions and the data and the one or more partitions are replicated across the set of nodes. A first partition is determined to be stale at a first node of the set of nodes. The first partition is marked for deletion locally at the first node. A set of deletion votes are determined for the first partition with each node being associated with a deletion vote. The method determines a deletion decision for the first partition on the first node based on the set of deletion votes.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Srikumar Venugopal, Stefano Braghin
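    A minimal sketch of the vote-based deletion decision described in the abstract above (publication 20230096276), assuming each replica-holding node contributes one boolean deletion vote; the unanimity/quorum rule shown is an assumption, not taken from the application.

        def deletion_decision(votes: dict, quorum: float = 1.0) -> bool:
            # votes maps node id -> True if that node also considers the partition stale.
            agreeing = sum(1 for v in votes.values() if v)
            return agreeing / len(votes) >= quorum

        votes = {"node-1": True, "node-2": True, "node-3": False}
        print(deletion_decision(votes))               # False: no unanimity, keep the partition
        print(deletion_decision(votes, quorum=0.5))   # True under a majority rule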
  • Patent number: 11605028
    Abstract: Embodiments for processing data with multiple machine learning models are provided. Input data is received. The input data is caused to be evaluated by a first machine learning model to generate a first inference result. The first inference result is compared to at least one quality of service (QoS) parameter. Based on the comparison of the first inference result to the at least one QoS parameter, the input data is caused to be evaluated by a second machine learning model to generate a second inference result.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: March 14, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michele Gazzetti, Srikumar Venugopal, Christian Pinto
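    A minimal sketch of the two-stage inference described in the abstract for patent 11605028 above, assuming the QoS parameter is a confidence threshold (one plausible choice; the patent's QoS parameters may differ) and that models return a (label, confidence) pair.

        def cascade_inference(x, fast_model, accurate_model, min_confidence=0.9):
            # Evaluate with the first (cheap) model; escalate to the second model only
            # when the first inference result fails the QoS check.
            label, confidence = fast_model(x)
            if confidence >= min_confidence:
                return label, "fast"
            label, _ = accurate_model(x)
            return label, "accurate"

        fast = lambda x: ("cat", 0.62)        # stand-in models returning (label, confidence)
        accurate = lambda x: ("dog", 0.97)
        print(cascade_inference("image.jpg", fast, accurate))   # ('dog', 'accurate')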
  • Publication number: 20230014344
    Abstract: A computer-implemented method, a computer program product, and a computer system for determining optimal data access for deep learning applications on a cluster. A server determines candidate cache locations for one or more compute nodes in the cluster. The server fetches a mini-batch of a dataset located at a remote storage service into the candidate cache locations. The server collects information about the time taken to complete a job on the one or more nodes, where the job is executed against the fetched mini-batch at the candidate cache locations and against the mini-batch at the remote storage location. The server selects a cache location from among the candidate cache locations and the remote storage location. The server fetches the data of the dataset from the remote storage service to the selected cache location, and the one or more nodes execute the job against the fetched data of the dataset at the cache location.
    Type: Application
    Filed: July 14, 2021
    Publication date: January 19, 2023
    Inventors: Srikumar Venugopal, Archit Patke, Ioannis Gkoufas, Christian Pinto, Panagiotis Koutsovasilis
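    A minimal sketch of the cache-location selection described in the abstract above (publication 20230014344): a single mini-batch is used as a probe, the job is timed against each candidate location and the remote store, and the fastest location wins. The timing helper and the stand-in job are assumptions for illustration.

        import time

        def pick_cache_location(job, mini_batch, locations):
            # Time the job against the probe mini-batch at every candidate location
            # and return the location with the shortest completion time.
            timings = {}
            for loc in locations:
                start = time.perf_counter()
                job(mini_batch, loc)
                timings[loc] = time.perf_counter() - start
            return min(timings, key=timings.get)

        def job(batch, location):
            # Stand-in training step; a real job would read the batch from `location`.
            time.sleep({"node-local-ssd": 0.01, "remote-s3": 0.05}[location])

        best = pick_cache_location(job, mini_batch=range(32),
                                   locations=["node-local-ssd", "remote-s3"])
        print(best)   # node-local-ssd; the full dataset would then be fetched there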
  • Patent number: 11544290
    Abstract: Embodiments for providing intelligent data replication and distribution in a computing environment. Data access patterns of one or more queries issued to a plurality of data partitions may be forecasted. Data may be dynamically distributed and replicated to one or more existing data partitions, or to additional ones of the plurality of data partitions, according to the forecasting.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: January 3, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Stefano Braghin, Srikumar Venugopal
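    A minimal sketch of forecast-driven replication as described in the abstract for patent 11544290 above, assuming a moving-average forecaster over per-partition access counts and a fixed hot-partition threshold; both choices are illustrative, not taken from the patent.

        def forecast_accesses(history, window=3):
            # Simple moving average over the most recent query-access counts.
            recent = history[-window:]
            return sum(recent) / len(recent)

        def plan_replication(partition_histories, hot_threshold=100, max_replicas=3):
            # Replicate partitions whose forecasted access rate exceeds the threshold.
            plan = {}
            for partition, history in partition_histories.items():
                hot = forecast_accesses(history) > hot_threshold
                plan[partition] = max_replicas if hot else 1
            return plan

        histories = {"users": [90, 120, 150], "logs": [10, 8, 12]}
        print(plan_replication(histories))   # {'users': 3, 'logs': 1}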
  • Publication number: 20220206999
    Abstract: A computer implemented method for optimizing performance of a workflow includes associating each of a plurality of workflow nodes in a workflow with a data cache and managing the data cache on a local storage device on one of one or more compute nodes. A scheduler can request execution of the tasks of a given one of the plurality of workflow nodes on one of the one or more compute nodes that hosts the data cache associated with the given one of the plurality of workflow nodes. Each of the plurality of workflow nodes is permitted to access a distributed filesystem that is visible to each of the plurality of compute nodes. The data cache stores data produced by the tasks of the given one of the plurality of workflow nodes.
    Type: Application
    Filed: December 29, 2020
    Publication date: June 30, 2022
    Inventors: Vasileios Vasileiadis, Christian Pinto, Michael Johnston, Ioannis Gkoufas, Srikumar Venugopal
  • Publication number: 20220206872
    Abstract: A computer-implemented method of providing data transformation includes installing one or more data transformation plugins in a dataset made accessible for processing an end user's workload. A dataset-specific policy for the accessible dataset is ingested. A data transformation of the accessible dataset is executed by invoking one or more of the data transformation plugins to the accessible dataset based on the dataset-specific policy to generate a transformed dataset. The user's workload is deployed to provide data access for processing using the transformed dataset in accordance with a data governance policy.
    Type: Application
    Filed: December 30, 2020
    Publication date: June 30, 2022
    Inventors: Ioannis Gkoufas, Christian Pinto, Srikumar Venugopal, Stefano Braghin
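    A minimal sketch of policy-driven data transformation as described in the abstract above (publication 20220206872): plugins are registered against the dataset, the dataset-specific policy names which ones to invoke, and the workload only sees the transformed records. Plugin names and the policy format are invented for the sketch.

        PLUGINS = {
            "mask_email": lambda rec: {**rec, "email": "***"},
            "drop_ssn":   lambda rec: {k: v for k, v in rec.items() if k != "ssn"},
        }

        def transform(dataset, policy):
            # Apply, in order, each transformation plugin the dataset-specific policy requests.
            for name in policy["transformations"]:
                dataset = [PLUGINS[name](rec) for rec in dataset]
            return dataset

        policy = {"transformations": ["mask_email", "drop_ssn"]}
        records = [{"name": "a", "email": "a@example.com", "ssn": "123"}]
        print(transform(records, policy))   # [{'name': 'a', 'email': '***'}]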
  • Patent number: 11093358
    Abstract: Embodiments for managing distributed computing systems are provided. Information associated with operation of a computing node within a distributed computing system is collected. A reliability score for the computing node is calculated based on the collected information. The calculating of the reliability score is performed utilizing the computing node. A remedial action associated with the operation of the computing node is caused to be performed based on the calculated reliability score.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: August 17, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Srikumar Venugopal, Christian Pinto
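    A minimal sketch of node-local reliability scoring as described in the abstract for patent 11093358 above, assuming the score is a weighted combination of operational signals and that remediation is triggered below a fixed threshold; the signals, weights, threshold, and remedial action are all illustrative assumptions.

        def reliability_score(metrics):
            # Higher error and restart rates lower the score; clamp to [0, 1].
            score = 1.0 - 0.5 * metrics["error_rate"] - 0.1 * metrics["restarts_per_day"]
            return max(0.0, min(1.0, score))

        def maybe_remediate(node_id, metrics, threshold=0.6):
            # The node scores itself; a remedial action is caused when the score is too low.
            if reliability_score(metrics) < threshold:
                return f"drain and reschedule workloads off {node_id}"
            return "no action"

        print(maybe_remediate("node-7", {"error_rate": 0.4, "restarts_per_day": 3}))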
  • Publication number: 20210216572
    Abstract: Embodiments for providing intelligent data replication and distribution in a computing environment. Data access patterns of one or more queries issued to a plurality of data partitions may be forecasted. Data may be dynamically distributed and replicated to one or more existing data partitions, or to additional ones of the plurality of data partitions, according to the forecasting.
    Type: Application
    Filed: January 13, 2020
    Publication date: July 15, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Stefano Braghin, Srikumar Venugopal
  • Publication number: 20210109830
    Abstract: Embodiments for managing distributed computing systems are provided. Information associated with operation of a computing node within a distributed computing system is collected. A reliability score for the computing node is calculated based on the collected information. The calculating of the reliability score is performed utilizing the computing node. A remedial action associated with the operation of the computing node is caused to be performed based on the calculated reliability score.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Srikumar Venugopal, Christian Pinto
  • Publication number: 20210065063
    Abstract: Embodiments for processing data with multiple machine learning models are provided. Input data is received. The input data is caused to be evaluated by a first machine learning model to generate a first inference result. The first inference result is compared to at least one quality of service (QoS) parameter. Based on the comparison of the first inference result to the at least one QoS parameter, the input data is caused to be evaluated by a second machine learning model to generate a second inference result.
    Type: Application
    Filed: August 26, 2019
    Publication date: March 4, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michele Gazzetti, Srikumar Venugopal, Christian Pinto
  • Patent number: 10318341
    Abstract: A job execution scheduling system and associated methods are provided for accommodating a request for additional computing resources to execute a job that is currently being executed or a request for computing resources to execute a new job. The job execution scheduling system may utilize a decision function to determine one or more currently executing jobs to select for resizing. Resizing a currently executing job may include de-allocating one or more computing resources from the currently executing job and allocating the de-allocated resources to the job for which the request was received. In this manner, the request for additional computing resources is accommodated, while at the same time, the one or more jobs from which computing resources were de-allocated continue to be executed using a reduced set of computing resources.
    Type: Grant
    Filed: August 18, 2016
    Date of Patent: June 11, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Pierre Lemarinier, Srikumar Venugopal
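    A minimal sketch of the resizing decision described in the abstract for patent 10318341 above, assuming each running job declares a minimum allocation and the decision function shrinks the jobs with the most spare resources first; this particular rule is an assumption, not the patented decision function.

        def select_jobs_to_resize(running_jobs, needed):
            # running_jobs: {job_id: {"allocated": units, "minimum": units}}
            # Returns {job_id: units to de-allocate}, or None if the request cannot be met.
            freed, plan = 0, {}
            spare = {j: v["allocated"] - v["minimum"] for j, v in running_jobs.items()}
            for job in sorted(spare, key=spare.get, reverse=True):
                if freed >= needed:
                    break
                take = min(spare[job], needed - freed)
                if take > 0:
                    plan[job] = take
                    freed += take
            return plan if freed >= needed else None

        jobs = {"j1": {"allocated": 8, "minimum": 4}, "j2": {"allocated": 6, "minimum": 5}}
        print(select_jobs_to_resize(jobs, needed=5))   # {'j1': 4, 'j2': 1}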
  • Patent number: 10216587
    Abstract: Embodiments for providing failure tolerance to containerized applications by one or more processors. A layered filesystem is initialized to maintain checkpoint information of stateful processes in separate and exclusive layers on individual containers. A most recent checkpoint layer is transferred from a main container exclusively to an additional node to maintain an additional, shadow container.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: February 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Khalid Hasanov, Pierre Lemarinier, Muhammad M. Rafique, Srikumar Venugopal
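    A minimal sketch of the layered checkpointing described in the abstract for patent 10216587 above, assuming each checkpoint occupies its own exclusive layer and only the most recent layer is copied to the node keeping the shadow container; the data structures and the transfer step are assumptions for illustration.

        class CheckpointedContainer:
            def __init__(self, name):
                self.name = name
                self.layers = []   # each checkpoint of a stateful process kept in its own layer

            def checkpoint(self, state: dict):
                self.layers.append(dict(state))

            def latest_layer(self):
                return self.layers[-1] if self.layers else None

        def replicate_latest(main: CheckpointedContainer, shadow: CheckpointedContainer):
            # Transfer only the most recent checkpoint layer to the shadow container.
            layer = main.latest_layer()
            if layer is not None:
                shadow.layers.append(dict(layer))

        main = CheckpointedContainer("app-main")
        shadow = CheckpointedContainer("app-shadow")
        main.checkpoint({"counter": 41})
        main.checkpoint({"counter": 42})
        replicate_latest(main, shadow)
        print(shadow.latest_layer())   # {'counter': 42}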
  • Publication number: 20180113770
    Abstract: Embodiments for providing failure tolerance to containerized applications by one or more processors. A layered filesystem is initialized to maintain checkpoint information of stateful processes in separate and exclusive layers on individual containers. A most recent checkpoint layer is transferred from a main container exclusively to an additional node to maintain an additional, shadow container.
    Type: Application
    Filed: October 21, 2016
    Publication date: April 26, 2018
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Khalid Hasanov, Pierre Lemarinier, Muhammad M. Rafique, Srikumar Venugopal
  • Publication number: 20170220379
    Abstract: A job execution scheduling system and associated methods are provided for accommodating a request for additional computing resources to execute a job that is currently being executed or a request for computing resources to execute a new job. The job execution scheduling system may utilize a decision function to determine one or more currently executing jobs to select for resizing. Resizing a currently executing job may include de-allocating one or more computing resources from the currently executing job and allocating the de-allocated resources to the job for which the request was received. In this manner, the request for additional computing resources is accommodated, while at the same time, the one or more jobs from which computing resources were de-allocated continue to be executed using a reduced set of computing resources.
    Type: Application
    Filed: August 18, 2016
    Publication date: August 3, 2017
    Inventors: Pierre Lemarinier, Srikumar Venugopal
  • Patent number: 9448842
    Abstract: A job execution scheduling system and associated methods are provided for accommodating a request for additional computing resources to execute a job that is currently being executed or a request for computing resources to execute a new job. The job execution scheduling system may utilize a decision function to determine one or more currently executing jobs to select for resizing. Resizing a currently executing job may include de-allocating one or more computing resources from the currently executing job and allocating the de-allocated resources to the job for which the request was received. In this manner, the request for additional computing resources is accommodated, while at the same time, the one or more jobs from which computing resources were de-allocated continue to be executed using a reduced set of computing resources.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: September 20, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Pierre Lemarinier, Srikumar Venugopal
  • Patent number: 8230070
    Abstract: A system and method for providing grid computing on a network of computing nodes, which includes a configurable service container executable at the nodes, including message dispatching, communication, network membership and persistence modules, and adapted to host pluggable service modules. When executed at the nodes, at least one instance of the container includes a membership service module for maintaining network connectivity between the nodes, at least one instance of the container includes a scheduler service module configured to receive one or more tasks from a client and schedule the tasks on at least one of the nodes, and at least one instance of the container includes an executor service module for receiving one or more tasks from the scheduler service module, executing the tasks so received and returning at least one result to the scheduler service module.
    Type: Grant
    Filed: November 7, 2008
    Date of Patent: July 24, 2012
    Assignee: Manjrasoft Pty. Ltd.
    Inventors: Rajkumar Buyya, Srikumar Venugopal, Xingchen Chu, Krishna Nadiminti
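    A minimal sketch of the pluggable service container described in the abstract for patent 8230070 above: a container instance hosts whichever service modules it is configured with, and a scheduler module dispatches tasks to executor modules and gathers their results. Module interfaces and names are assumptions for the sketch, not taken from the patent or any Manjrasoft product.

        class ServiceContainer:
            def __init__(self):
                self.modules = {}   # pluggable service modules hosted by this container

            def plug(self, name, module):
                self.modules[name] = module

        class ExecutorService:
            def execute(self, task):
                return task()   # run the task and return its result to the scheduler

        class SchedulerService:
            def __init__(self, executors):
                self.executors = executors

            def schedule(self, tasks):
                # Round-robin the client's tasks over the executor modules and collect results.
                return [self.executors[i % len(self.executors)].execute(t)
                        for i, t in enumerate(tasks)]

        worker = ServiceContainer()
        worker.plug("executor", ExecutorService())
        head = ServiceContainer()
        head.plug("scheduler", SchedulerService([worker.modules["executor"]]))
        print(head.modules["scheduler"].schedule([lambda: 2 + 2, lambda: "ok"]))   # [4, 'ok']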
  • Publication number: 20100281166
    Abstract: A software platform for providing grid computing on a network of computing nodes, comprising a configurable service container executable at the nodes, including message dispatching, communication, network membership and persistence modules, and adapted to host pluggable service modules. When executed at the nodes, at least one instance of the container includes a membership service module for maintaining network connectivity between the nodes, at least one instance of the container includes a scheduler service module configured to receive one or more tasks from a client and schedule the tasks on at least one of the nodes, and at least one instance of the container includes an executor service module for receiving one or more tasks from the scheduler service module, executing the tasks so received and returning at least one result to the scheduler service module.
    Type: Application
    Filed: November 7, 2008
    Publication date: November 4, 2010
    Applicant: MANJRASOFT PTY LTD
    Inventors: Rajkumar Buyya, Srikumar Venugopal, Xingchen Chu, Krishna Nadiminti