Patents by Inventor Christian Pinto

Christian Pinto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230306481
    Abstract: In an approach for storage, search, acquisition, and composition of a digital artifact, a processor obtains the digital artifact in a digital marketplace platform. The digital artifact is a collection of digital data with automatically generated and verifiable provenance and usage data. A processor transforms the digital artifact to define an access privilege. A processor shares the digital artifact in the digital marketplace platform by providing a view of a catalogue including the digital artifact. A processor authorizes a usage request based on the access privilege. A processor rewards a source of the digital artifact based on the usage of the digital artifact.
    Type: Application
    Filed: March 24, 2022
    Publication date: September 28, 2023
    Inventors: Vasileios Vasileiadis, Srikumar Venugopal, Stefano Braghin, Christian Pinto, Michael Johnston, Yiannis Gkoufas
  • Patent number: 11755543
    Abstract: A computer-implemented method for optimizing the performance of a workflow includes associating each of a plurality of workflow nodes in a workflow with a data cache and managing the data cache on a local storage device on one of one or more compute nodes. A scheduler can request execution of the tasks of a given one of the plurality of workflow nodes on one of the one or more compute nodes that hosts the data cache associated with the given one of the plurality of workflow nodes. Each of the plurality of workflow nodes is permitted to access a distributed filesystem that is visible to each of the plurality of compute nodes. The data cache stores data produced by the tasks of the given one of the plurality of workflow nodes.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: September 12, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vasileios Vasileiadis, Christian Pinto, Michael Johnston, Ioannis Gkoufas, Srikumar Venugopal
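The cache-affinity scheduling idea in this abstract can be sketched in a few lines. This is an editor's illustration, not the patented implementation; all names (`assign_cache`, `schedule`, the node identifiers) are hypothetical:

```python
# Hypothetical sketch: each workflow node has a data cache pinned to one
# compute node, and the scheduler prefers to run a workflow node's tasks
# on the compute node that hosts its cache.

caches = {}  # workflow_node -> compute_node hosting its data cache

def assign_cache(workflow_node, compute_node):
    """Associate a workflow node's data cache with a compute node."""
    caches[workflow_node] = compute_node

def schedule(workflow_node, available_nodes):
    """Prefer the compute node hosting the cache; otherwise fall back."""
    host = caches.get(workflow_node)
    if host in available_nodes:
        return host
    return available_nodes[0]  # no cache affinity known: pick any node

assign_cache("preprocess", "node-2")
print(schedule("preprocess", ["node-1", "node-2", "node-3"]))  # node-2
print(schedule("train", ["node-1", "node-2"]))                 # node-1
```

The distributed filesystem mentioned in the abstract would serve as the fallback data path when a task cannot be co-located with its cache.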
  • Patent number: 11733902
    Abstract: Local memory and disaggregated memory may be identified and monitored for integrating disaggregated memory in a computing system. Candidate data may be migrated between the local memory and disaggregated memory to optimize allocation of disaggregated memory and migrated data according to a dynamic set of migration criteria.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: August 22, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Panagiotis Koutsovasilis, Michele Gazzetti, Christian Pinto
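The hot/cold migration policy described above can be illustrated with a toy tiering sketch. The page representation, threshold, and function names below are the editor's assumptions, not the patent's design:

```python
# Hypothetical sketch: demote cold pages from local to disaggregated
# memory and promote hot pages back, per a migration criterion
# (here, a simple access-count threshold).

def plan_migrations(pages, hot_threshold):
    """pages: dict page_id -> (access_count, tier), tier in {'local', 'remote'}.
    Returns (to_remote, to_local) lists of migration candidates."""
    to_remote, to_local = [], []
    for pid, (accesses, tier) in pages.items():
        if tier == "local" and accesses < hot_threshold:
            to_remote.append(pid)   # cold data: demote to disaggregated memory
        elif tier == "remote" and accesses >= hot_threshold:
            to_local.append(pid)    # hot data: promote to local memory
    return to_remote, to_local

pages = {"p1": (120, "local"), "p2": (3, "local"), "p3": (90, "remote")}
print(plan_migrations(pages, hot_threshold=50))  # (['p2'], ['p3'])
```

A "dynamic set of migration criteria", as the abstract puts it, would adjust thresholds like `hot_threshold` at runtime based on monitoring.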
  • Patent number: 11662934
    Abstract: A data processing system includes a system fabric, a system memory, a memory controller, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link, to a destination host with which the source host is non-coherent. A plurality of processing units is configured to execute a logical partition and to migrate the logical partition to the destination host via the communication link. Migration of the logical partition includes migrating, via the communication link, the dataset of the logical partition executing on the source host from the system memory of the source host to a system memory of the destination host. After migrating at least a portion of the dataset, a state of the logical partition is migrated, via the communication link, from the source host to the destination host, such that the logical partition thereafter executes on the destination host.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: May 30, 2023
    Assignee: International Business Machines Corporation
    Inventors: Steven Leonard Roberts, David A. Larson Stanton, Peter J. Heyrman, Stuart Zachary Jacobs, Christian Pinto
  • Publication number: 20230077733
    Abstract: A request may be identified having one or more constraints for accessing disaggregated resources in a computing environment. One or more resources in a plurality of disaggregated resources may be identified based on the request. Computing server instances may be dynamically orchestrated using the one or more resources in the plurality of disaggregated resources based on the one or more constraints.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 16, 2023
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michele GAZZETTI, Panagiotis KOUTSOVASILIS, Christian PINTO
  • Patent number: 11605028
    Abstract: Embodiments for processing data with multiple machine learning models are provided. Input data is received. The input data is caused to be evaluated by a first machine learning model to generate a first inference result. The first inference result is compared to at least one quality of service (QoS) parameter. Based on the comparison of the first inference result to the at least one QoS parameter, the input data is caused to be evaluated by a second machine learning model to generate a second inference result.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: March 14, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michele Gazzetti, Srikumar Venugopal, Christian Pinto
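The QoS-gated model cascade described in this abstract lends itself to a short sketch. The function names, confidence-based QoS check, and toy models below are the editor's hypothetical stand-ins:

```python
# Hypothetical sketch: a cheap first model answers when its result meets
# the QoS parameter (here, a minimum confidence); otherwise the input is
# escalated to a second, heavier model.

def cascade_infer(x, fast_model, slow_model, min_confidence):
    label, confidence = fast_model(x)
    if confidence >= min_confidence:   # compare first inference to QoS parameter
        return label, "fast"
    label, confidence = slow_model(x)  # escalate to the second model
    return label, "slow"

fast = lambda x: ("cat", 0.60)   # toy model: low-confidence answer
slow = lambda x: ("dog", 0.95)   # toy model: high-confidence answer
print(cascade_infer("img", fast, slow, min_confidence=0.80))  # ('dog', 'slow')
print(cascade_infer("img", fast, slow, min_confidence=0.50))  # ('cat', 'fast')
```

In practice the QoS parameter could also be a latency or accuracy target rather than a confidence floor.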
  • Patent number: 11588750
    Abstract: A request may be identified having one or more constraints for accessing disaggregated resources in a computing environment. One or more resources in a plurality of disaggregated resources may be identified based on the request. Computing server instances may be dynamically orchestrated using the one or more resources in the plurality of disaggregated resources based on the one or more constraints.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: February 21, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michele Gazzetti, Panagiotis Koutsovasilis, Christian Pinto
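The constraint-driven composition of server instances from disaggregated resources can be sketched as a matching problem. The pool/constraint shapes and the `orchestrate` name below are the editor's assumptions for illustration only:

```python
# Hypothetical sketch: compose a server instance by selecting one
# disaggregated resource per requested kind, subject to capacity
# constraints from the request.

def orchestrate(pool, constraints):
    """pool: list of dicts with 'kind' and 'capacity'.
    constraints: dict kind -> minimum capacity required.
    Returns one matching resource per requested kind, or None."""
    instance = {}
    for kind, needed in constraints.items():
        match = next((r for r in pool
                      if r["kind"] == kind and r["capacity"] >= needed), None)
        if match is None:
            return None  # constraints cannot be satisfied from this pool
        instance[kind] = match
    return instance

pool = [{"kind": "cpu", "capacity": 16},
        {"kind": "memory", "capacity": 256},
        {"kind": "gpu", "capacity": 1}]
print(orchestrate(pool, {"cpu": 8, "memory": 128}))
```

Dynamic orchestration, as claimed, would additionally release and re-bind these resources as constraints change over the instance's lifetime.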
  • Publication number: 20230014344
    Abstract: A computer-implemented method, a computer program product, and a computer system for determining optimal data access for deep learning applications on a cluster. A server determines candidate cache locations for one or more compute nodes in the cluster. The server fetches a mini-batch of a dataset located at a remote storage service into the candidate cache locations. The server collects information about the time taken to complete a job on the one or more nodes, where the job is executed against the fetched mini-batch at the candidate cache locations and against the mini-batch at the remote storage location. The server selects a cache location from among the candidate cache locations and the remote storage location. The server fetches the data of the dataset from the remote storage service to the selected cache location, and the one or more nodes execute the job against the fetched data at that location.
    Type: Application
    Filed: July 14, 2021
    Publication date: January 19, 2023
    Inventors: Srikumar Venugopal, Archit Patke, Ioannis Gkoufas, Christian Pinto, Panagiotis Koutsovasilis
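The mini-batch probing step above amounts to benchmarking each candidate location and picking the fastest. A minimal sketch, with the timing callback and location names invented by the editor:

```python
# Hypothetical sketch: run a small probe job against a mini-batch at each
# candidate cache location (and at the remote store), then select the
# location with the shortest completion time.

def pick_cache_location(locations, run_minibatch):
    """run_minibatch(location) -> elapsed seconds for the probe job."""
    timings = {loc: run_minibatch(loc) for loc in locations}
    best = min(timings, key=timings.get)
    return best, timings

# Toy timings standing in for real probe measurements.
probe = {"node-ssd": 0.8, "node-mem": 0.3, "remote": 2.5}
best, timings = pick_cache_location(list(probe), probe.get)
print(best)  # node-mem
```

Probing with one mini-batch keeps the measurement cheap relative to fetching the full dataset to every candidate location.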
  • Publication number: 20220350518
    Abstract: Local memory and disaggregated memory may be identified and monitored for integrating disaggregated memory in a computing system. Candidate data may be migrated between the local memory and disaggregated memory to optimize allocation of disaggregated memory and migrated data according to a dynamic set of migration criteria.
    Type: Application
    Filed: April 30, 2021
    Publication date: November 3, 2022
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Panagiotis Koutsovasilis, Michele Gazzetti, Christian Pinto
  • Publication number: 20220206999
    Abstract: A computer-implemented method for optimizing the performance of a workflow includes associating each of a plurality of workflow nodes in a workflow with a data cache and managing the data cache on a local storage device on one of one or more compute nodes. A scheduler can request execution of the tasks of a given one of the plurality of workflow nodes on one of the one or more compute nodes that hosts the data cache associated with the given one of the plurality of workflow nodes. Each of the plurality of workflow nodes is permitted to access a distributed filesystem that is visible to each of the plurality of compute nodes. The data cache stores data produced by the tasks of the given one of the plurality of workflow nodes.
    Type: Application
    Filed: December 29, 2020
    Publication date: June 30, 2022
    Inventors: Vasileios Vasileiadis, Christian Pinto, Michael Johnston, Ioannis Gkoufas, Srikumar Venugopal
  • Publication number: 20220206872
    Abstract: A computer-implemented method of providing data transformation includes installing one or more data transformation plugins in a dataset made accessible for processing an end user's workload. A dataset-specific policy for the accessible dataset is ingested. A data transformation of the accessible dataset is executed by invoking one or more of the data transformation plugins to the accessible dataset based on the dataset-specific policy to generate a transformed dataset. The user's workload is deployed to provide data access for processing using the transformed dataset in accordance with a data governance policy.
    Type: Application
    Filed: December 30, 2020
    Publication date: June 30, 2022
    Inventors: Ioannis Gkoufas, Christian Pinto, Srikumar Venugopal, Stefano Braghin
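The policy-driven plugin pipeline in this abstract can be sketched as an ordered application of transformations. The plugin names (`redact`, `lowercase`) and the list-of-dicts dataset shape are the editor's hypothetical choices:

```python
# Hypothetical sketch: invoke installed data-transformation plugins over a
# dataset in the order dictated by a dataset-specific policy, producing the
# transformed dataset the workload is allowed to see.

def redact(rows):
    """Plugin: mask a sensitive field before workload access."""
    return [{k: ("***" if k == "ssn" else v) for k, v in r.items()} for r in rows]

def lowercase(rows):
    """Plugin: normalize string values."""
    return [{k: (v.lower() if isinstance(v, str) else v) for k, v in r.items()}
            for r in rows]

PLUGINS = {"redact": redact, "lowercase": lowercase}  # installed plugins

def transform(dataset, policy):
    """policy: ordered plugin names from the ingested dataset-specific policy."""
    for name in policy:
        dataset = PLUGINS[name](dataset)
    return dataset

data = [{"name": "Ada", "ssn": "123-45-6789"}]
print(transform(data, ["redact", "lowercase"]))  # [{'name': 'ada', 'ssn': '***'}]
```

The workload is then deployed against the transformed output only, which is how the data governance policy is enforced.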
  • Publication number: 20220188007
    Abstract: A data processing system includes a system fabric, a system memory, a memory controller, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link to a destination host with which the source host is non-coherent. A plurality of processing units is configured to execute a logical partition and to migrate the logical partition to the destination host via the communication link. Migration of the logical partition includes migrating, via a communication link, the dataset of the logical partition executing on the source host from the system memory of the source host to a system memory of the destination host. After migrating at least a portion of the dataset, a state of the logical partition is migrated, via the communication link, from the source host to the destination host, such that the logical partition thereafter executes on the destination host.
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Steven Leonard Roberts, David A. Larson Stanton, Peter J. Heyrman, Stuart Zachary Jacobs, Christian Pinto
  • Publication number: 20220164471
    Abstract: Disclosed are techniques for linking information about individual entities across multiple datasets. A target dataset with some information corresponding to at least one attribute of an entity is received. Semantic processing is performed on the target dataset to extract semantic representations of the information and its corresponding attributes. These representations are used to search at least one other dataset for additional information, absent from the target dataset, that corresponds to at least one attribute of the entity; the target dataset is then augmented with this additional information. The process repeats iteratively, with each subsequent iteration including semantic representations of the information found in the searches of previous iterations, until no additional information about the entity is found when searching the multiple datasets with semantic representations of the now-augmented target dataset.
    Type: Application
    Filed: November 23, 2020
    Publication date: May 26, 2022
    Inventors: Stefano Braghin, Killian Levacher, Christian Pinto, Marco Simioni
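The iterate-until-fixed-point augmentation loop above can be sketched with a crude matching rule standing in for the semantic representations. Everything here (the shared-attribute match, the record shapes) is the editor's simplification:

```python
# Hypothetical sketch of iterative entity linking: repeatedly search other
# datasets with the growing set of known attributes, augmenting the entity
# with absent attributes, until a full pass adds nothing new.

def augment(entity, datasets):
    changed = True
    while changed:                    # iterate until no additional info is found
        changed = False
        for ds in datasets:
            for record in ds:
                # Stand-in for semantic matching: any shared attribute value.
                if set(record.items()) & set(entity.items()):
                    for k, v in record.items():
                        if k not in entity:
                            entity[k] = v   # absent attribute: augment
                            changed = True
    return entity

d1 = [{"email": "a@x.org", "name": "Ada"}]
d2 = [{"name": "Ada", "city": "Dublin"}]
print(augment({"email": "a@x.org"}, [d1, d2]))
```

Note how `d2` only matches after the first pass adds `name` from `d1`, which is exactly the iterative behavior the abstract claims.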
  • Patent number: 11093358
    Abstract: Embodiments for managing distributed computing systems are provided. Information associated with operation of a computing node within a distributed computing system is collected. A reliability score for the computing node is calculated based on the collected information. The calculating of the reliability score is performed utilizing the computing node. A remedial action associated with the operation of the computing node is caused to be performed based on the calculated reliability score.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: August 17, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Srikumar Venugopal, Christian Pinto
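The self-assessed reliability score and remediation trigger described above can be sketched as follows; the metric names, weighting, and remedial action string are the editor's hypothetical choices:

```python
# Hypothetical sketch: a node computes its own reliability score from
# collected operational metrics and triggers a remedial action when the
# score falls below a floor.

def reliability_score(metrics):
    """metrics: dict with 'uptime' (0..1), 'error_rate' (0..1), 'temp_ok' (bool)."""
    score = metrics["uptime"] * (1.0 - metrics["error_rate"])
    if not metrics["temp_ok"]:
        score *= 0.5  # thermal problems halve the score
    return score

def maybe_remediate(metrics, floor=0.7):
    score = reliability_score(metrics)
    action = "drain-and-reboot" if score < floor else "none"
    return action, score

action, score = maybe_remediate({"uptime": 0.99, "error_rate": 0.4, "temp_ok": True})
print(action)  # drain-and-reboot  (0.99 * 0.6 = 0.594 < 0.7)
```

Computing the score on the node itself, as the abstract specifies, avoids shipping raw telemetry to a central controller.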
  • Publication number: 20210109830
    Abstract: Embodiments for managing distributed computing systems are provided. Information associated with operation of a computing node within a distributed computing system is collected. A reliability score for the computing node is calculated based on the collected information. The calculating of the reliability score is performed utilizing the computing node. A remedial action associated with the operation of the computing node is caused to be performed based on the calculated reliability score.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Srikumar VENUGOPAL, Christian PINTO
  • Publication number: 20210065063
    Abstract: Embodiments for processing data with multiple machine learning models are provided. Input data is received. The input data is caused to be evaluated by a first machine learning model to generate a first inference result. The first inference result is compared to at least one quality of service (QoS) parameter. Based on the comparison of the first inference result to the at least one QoS parameter, the input data is caused to be evaluated by a second machine learning model to generate a second inference result.
    Type: Application
    Filed: August 26, 2019
    Publication date: March 4, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michele GAZZETTI, Srikumar VENUGOPAL, Christian PINTO
  • Patent number: 10893120
    Abstract: Embodiments for data caching and data-aware placement for machine learning by a processor. Data may be cached in a distributed data store on one or more local compute nodes of a cluster of nodes. A new job may be scheduled, according to cache and data locality awareness, on the one or more local compute nodes with the cached data needed for execution.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: January 12, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Seetharami R. Seelam, Andrea Reale, Christian Pinto, Yiannis Gkoufas, Kostas Katrinis, Steven N. Eliuk
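The cache-aware placement idea can be sketched as picking the node whose cache overlaps most with the job's inputs. The `place_job` name and cache representation are the editor's assumptions:

```python
# Hypothetical sketch: schedule a new job on the compute node that already
# caches the largest share of the job's input files.

def place_job(job_files, node_caches):
    """node_caches: dict node -> set of cached file names.
    Returns (best_node, number_of_cache_hits)."""
    wanted = set(job_files)
    best = max(node_caches, key=lambda n: len(node_caches[n] & wanted))
    return best, len(node_caches[best] & wanted)

caches = {"n1": {"a", "b"}, "n2": {"a", "b", "c"}, "n3": set()}
print(place_job(["a", "c", "d"], caches))  # ('n2', 2)
```

A real scheduler would weigh cache hits against node load; this sketch shows only the locality-awareness signal.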
  • Publication number: 20200092392
    Abstract: Embodiments for data caching and data-aware placement for machine learning by a processor. Data may be cached in a distributed data store on one or more local compute nodes of a cluster of nodes. A new job may be scheduled, according to cache and data locality awareness, on the one or more local compute nodes with the cached data needed for execution.
    Type: Application
    Filed: September 19, 2018
    Publication date: March 19, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Seetharami R. SEELAM, Andrea REALE, Christian PINTO, Yiannis GKOUFAS, Kostas KATRINIS, Steven N. ELIUK
  • Patent number: 10275288
    Abstract: The invention concerns a processing system comprising: a compute node (20) having one or more processors and one or more memory devices storing software enabling virtual computing resources and virtual memory to be assigned to support a plurality of virtual machines (VM1); a reconfigurable circuit (301) comprising a dynamically reconfigurable portion (302) comprising one or more partitions (304) that are reconfigurable during runtime and implement at least one hardware accelerator (ACC #1 to #N) assigned to at least one of the plurality of virtual machines (VM); and a virtualization manager (306) providing an interface between the at least one hardware accelerator (ACC #1 to #N) and the compute node (202) and comprising a circuit (406) adapted to translate, for the at least one hardware accelerator, virtual memory addresses into corresponding physical memory addresses to permit communication between the one or more hardware accelerators and the plurality of virtual machines.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: April 30, 2019
    Assignee: Virtual Open Systems
    Inventors: Christian Pinto, Michele Paolino, Salvatore Daniele Raho
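The address-translation role of the virtualization manager described above reduces to a page-table lookup that gives the accelerator physical addresses for a VM's virtual memory. A toy sketch, with the page size, table contents, and function name chosen by the editor:

```python
# Hypothetical sketch of the translation step: map a guest virtual address
# to the physical address the hardware accelerator must use.

PAGE = 4096  # assumed 4 KiB pages

# Toy page table: virtual page number -> physical frame base address.
page_table = {0x1000 // PAGE: 0x8000, 0x2000 // PAGE: 0xC000}

def translate(vaddr):
    """Translate a virtual address; raise on an unmapped page."""
    base = page_table.get(vaddr // PAGE)
    if base is None:
        raise LookupError("page fault: no mapping for 0x%x" % vaddr)
    return base + (vaddr % PAGE)  # frame base + offset within the page

print(hex(translate(0x1004)))  # 0x8004
```

In the patented system this lookup would be done by a circuit in the virtualization manager, letting reconfigurable-fabric accelerators address VM memory without software mediation on every access.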
  • Publication number: 20160321113
    Abstract: The invention concerns a processing system comprising: a compute node (20) having one or more processors and one or more memory devices storing software enabling virtual computing resources and virtual memory to be assigned to support a plurality of virtual machines (VM1); a reconfigurable circuit (301) comprising a dynamically reconfigurable portion (302) comprising one or more partitions (304) that are reconfigurable during runtime and implement at least one hardware accelerator (ACC #1 to #N) assigned to at least one of the plurality of virtual machines (VM); and a virtualization manager (306) providing an interface between the at least one hardware accelerator (ACC #1 to #N) and the compute node (202) and comprising a circuit (406) adapted to translate, for the at least one hardware accelerator, virtual memory addresses into corresponding physical memory addresses to permit communication between the one or more hardware accelerators and the plurality of virtual machines.
    Type: Application
    Filed: April 29, 2016
    Publication date: November 3, 2016
    Inventors: Christian Pinto, Michele Paolino, Salvatore Daniele Raho