Patents by Inventor Christian Pinto
Christian Pinto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230306481
Abstract: In an approach for storage, search, acquisition, and composition of a digital artifact, a processor obtains the digital artifact in a digital marketplace platform. The digital artifact is a collection of digital data with automatically generated and verifiable provenance and usage data. A processor transforms the digital artifact to define an access privilege. A processor shares the digital artifact in the digital marketplace platform by providing a view of a catalogue including the digital artifact. A processor authorizes a usage request based on the access privilege. A processor rewards a source of the digital artifact based on the usage of the digital artifact.
Type: Application
Filed: March 24, 2022
Publication date: September 28, 2023
Inventors: Vasileios Vasileiadis, Srikumar Venugopal, Stefano Braghin, Christian Pinto, Michael Johnston, Yiannis Gkoufas
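The authorization-and-reward flow in this abstract can be illustrated with a minimal sketch. All names (`DigitalArtifact`, `authorize_usage`, the privilege strings) are hypothetical, not from the patent; the point is only the sequence: check a request against the artifact's access privilege, record verifiable usage, and credit the artifact's source.

```python
# Hypothetical sketch of the marketplace authorization step: an artifact
# carries an access privilege and a usage log; a request is granted only
# if its action falls within that privilege, and each granted use rewards
# the artifact's source.

class DigitalArtifact:
    def __init__(self, source, data, privilege):
        self.source = source          # who contributed the artifact
        self.data = data
        self.privilege = privilege    # e.g. {"read"} or {"read", "compose"}
        self.usage_log = []           # verifiable usage record

def authorize_usage(artifact, requester, action, rewards):
    """Grant the request if the action is within the artifact's privilege,
    log the usage, and reward the artifact's source."""
    if action not in artifact.privilege:
        return False
    artifact.usage_log.append((requester, action))
    rewards[artifact.source] = rewards.get(artifact.source, 0) + 1
    return True
```

A denied action leaves both the usage log and the reward ledger untouched, which keeps the provenance record consistent with what was actually authorized.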
-
Patent number: 11755543
Abstract: A computer implemented method for optimizing performance of a workflow includes associating each of a plurality of workflow nodes in a workflow with a data cache and managing the data cache on a local storage device on one of one or more compute nodes. A scheduler can request execution of the tasks of a given one of the plurality of workflow nodes on one of the one or more compute nodes that hosts the data cache associated with the given one of the plurality of workflow nodes. Each of the plurality of workflow nodes is permitted to access a distributed filesystem that is visible to each of the plurality of compute nodes. The data cache stores data produced by the tasks of the given one of the plurality of workflow nodes.
Type: Grant
Filed: December 29, 2020
Date of Patent: September 12, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Vasileios Vasileiadis, Christian Pinto, Michael Johnston, Ioannis Gkoufas, Srikumar Venugopal
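The placement rule this abstract describes can be sketched in a few lines. The function and dictionary names below are hypothetical; the sketch only captures the decision: prefer the compute node hosting the workflow node's data cache, and fall back to any node because all of them can reach the distributed filesystem.

```python
# Minimal sketch of cache-aware scheduling: each workflow node may have a
# data cache hosted on one compute node; the scheduler places the workflow
# node's tasks on that host when it exists.

def schedule(workflow_node, cache_host_of, compute_nodes):
    """Return the compute node hosting the workflow node's data cache,
    or the first available node if no cache exists yet."""
    host = cache_host_of.get(workflow_node)
    if host in compute_nodes:
        return host           # data-local placement: reuse the cache
    return compute_nodes[0]   # fallback; the distributed FS is visible everywhere
```

The fallback is safe precisely because the cache is an optimization layered over a filesystem every node can see, which matches the abstract's split between the per-node cache and the shared distributed filesystem.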
-
Patent number: 11733902
Abstract: Local memory and disaggregated memory may be identified and monitored for integrating disaggregated memory in a computing system. Candidate data may be migrated between the local memory and disaggregated memory to optimize allocation of disaggregated memory and migrated data according to a dynamic set of migration criteria.
Type: Grant
Filed: April 30, 2021
Date of Patent: August 22, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Panagiotis Koutsovasilis, Michele Gazzetti, Christian Pinto
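A tiny sketch can make the migration criteria concrete. Everything here is hypothetical (the patent does not specify access counts or a threshold); it simply illustrates one plausible "dynamic set of migration criteria": demote cold data to disaggregated memory and promote hot data back to local memory.

```python
# Hypothetical migration policy: monitored pages carry an access count, and
# a tunable hotness threshold decides which data moves between local and
# disaggregated memory.

def plan_migrations(pages, hot_threshold):
    """pages: dict name -> (location, access_count).
    Returns planned moves as (name, from, to) tuples."""
    moves = []
    for name, (loc, accesses) in pages.items():
        if loc == "local" and accesses < hot_threshold:
            moves.append((name, "local", "disaggregated"))   # demote cold data
        elif loc == "disaggregated" and accesses >= hot_threshold:
            moves.append((name, "disaggregated", "local"))   # promote hot data
    return moves
```

Because the threshold is a parameter, the policy can be retuned at runtime, which is one way to read the abstract's "dynamic" set of criteria.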
-
Patent number: 11662934
Abstract: A data processing system includes a system fabric, a system memory, a memory controller, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link, to a destination host with which the source host is non-coherent. A plurality of processing units is configured to execute a logical partition and to migrate the logical partition to the destination host via the communication link. Migration of the logical partition includes migrating, via the communication link, the dataset of the logical partition executing on the source host from the system memory of the source host to a system memory of the destination host. After migrating at least a portion of the dataset, a state of the logical partition is migrated, via the communication link, from the source host to the destination host, such that the logical partition thereafter executes on the destination host.
Type: Grant
Filed: December 15, 2020
Date of Patent: May 30, 2023
Assignee: International Business Machines Corporation
Inventors: Steven Leonard Roberts, David A. Larson Stanton, Peter J. Heyrman, Stuart Zachary Jacobs, Christian Pinto
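The two-phase ordering in this abstract (dataset first, execution state after) can be sketched abstractly. The structures and the `link` callable below are hypothetical stand-ins for the hardware link controller; the sketch only preserves the phase order.

```python
# Hypothetical sketch of the migration order: the partition's dataset pages
# move over the communication link first; its execution state follows, and
# only then does the partition run on the destination host.

def migrate_partition(partition, link):
    """partition: {'dataset': {addr: page}, 'state': ...}.
    link: callable simulating transfer over the non-coherent link."""
    dest = {"dataset": {}, "state": None, "running": False}
    # Phase 1: migrate the dataset page by page via the link.
    for addr, page in partition["dataset"].items():
        dest["dataset"][addr] = link(page)
    # Phase 2: after (at least part of) the dataset, migrate the state.
    dest["state"] = link(partition["state"])
    dest["running"] = True   # the partition thereafter executes here
    return dest
```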
-
Publication number: 20230077733
Abstract: A request may be identified having one or more constraints for accessing disaggregated resources in a computing environment. One or more resources in a plurality of disaggregated resources may be identified based on the request. Computing server instances may be dynamically orchestrated using the one or more resources in the plurality of disaggregated resources based on the one or more constraints.
Type: Application
Filed: September 16, 2021
Publication date: March 16, 2023
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michele GAZZETTI, Panagiotis KOUTSOVASILIS, Christian PINTO
-
Patent number: 11605028
Abstract: Embodiments for processing data with multiple machine learning models are provided. Input data is received. The input data is caused to be evaluated by a first machine learning model to generate a first inference result. The first inference result is compared to at least one quality of service (QoS) parameter. Based on the comparison of the first inference result to the at least one QoS parameter, the input data is caused to be evaluated by a second machine learning model to generate a second inference result.
Type: Grant
Filed: August 26, 2019
Date of Patent: March 14, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michele Gazzetti, Srikumar Venugopal, Christian Pinto
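The cascade this abstract describes is easy to sketch. All names below are hypothetical, and a confidence floor is used as an illustrative stand-in for the QoS parameter (the patent does not say the parameter is confidence): run a first model, compare its result against the QoS parameter, and only invoke a second model when the comparison fails.

```python
# Minimal sketch of two-stage inference: a cheap first model answers when
# its result satisfies the QoS parameter (here, a minimum confidence);
# otherwise the input is re-evaluated by a second model.

def cascaded_infer(x, model_small, model_large, min_confidence):
    label, confidence = model_small(x)
    if confidence >= min_confidence:
        return label, "small"      # QoS met; the second model is skipped
    label, _ = model_large(x)
    return label, "large"          # fallback evaluation by the second model
```

The practical appeal is that most inputs never pay the cost of the second model, while hard inputs still get the stronger evaluation.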
-
Patent number: 11588750
Abstract: A request may be identified having one or more constraints for accessing disaggregated resources in a computing environment. One or more resources in a plurality of disaggregated resources may be identified based on the request. Computing server instances may be dynamically orchestrated using the one or more resources in the plurality of disaggregated resources based on the one or more constraints.
Type: Grant
Filed: September 16, 2021
Date of Patent: February 21, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michele Gazzetti, Panagiotis Koutsovasilis, Christian Pinto
-
Publication number: 20230014344
Abstract: A computer-implemented method, a computer program product, and a computer system for determining optimal data access for deep learning applications on a cluster. A server determines candidate cache locations for one or more compute nodes in the cluster. The server fetches a mini-batch of a dataset located at a remote storage service into the candidate cache locations. The server collects information about time periods of completing a job on the one or more nodes, where the job is executed against the fetched mini-batch at the candidate cache locations and against the mini-batch at the remote storage location. The server selects a cache location from the candidate cache locations and the remote storage location. The server fetches the data of the dataset from the remote storage service to the cache location, and the one or more nodes execute the job against the fetched data of the dataset at the cache location.
Type: Application
Filed: July 14, 2021
Publication date: January 19, 2023
Inventors: Srikumar Venugopal, Archit Patke, Ioannis Gkoufas, Christian Pinto, Panagiotis Koutsovasilis
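The selection step above amounts to a small benchmark-and-pick loop. The function names are hypothetical; the sketch assumes a `run_probe_job` callable that times the mini-batch job at a given location, standing in for the server's measurement step.

```python
# Hypothetical sketch of cache-location selection: time a probe job against
# the mini-batch at each candidate location (including the remote store),
# then choose the location with the lowest completion time.

def pick_cache_location(candidates, run_probe_job):
    """candidates: list of location names (remote store included).
    run_probe_job(loc) -> elapsed seconds for the mini-batch job."""
    timings = {loc: run_probe_job(loc) for loc in candidates}
    return min(timings, key=timings.get), timings
```

Profiling on a mini-batch rather than the full dataset keeps the measurement cheap while still reflecting each location's real I/O behavior.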
-
Publication number: 20220350518
Abstract: Local memory and disaggregated memory may be identified and monitored for integrating disaggregated memory in a computing system. Candidate data may be migrated between the local memory and disaggregated memory to optimize allocation of disaggregated memory and migrated data according to a dynamic set of migration criteria.
Type: Application
Filed: April 30, 2021
Publication date: November 3, 2022
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Panagiotis Koutsovasilis, Michele Gazzetti, Christian Pinto
-
Publication number: 20220206999
Abstract: A computer implemented method for optimizing performance of a workflow includes associating each of a plurality of workflow nodes in a workflow with a data cache and managing the data cache on a local storage device on one of one or more compute nodes. A scheduler can request execution of the tasks of a given one of the plurality of workflow nodes on one of the one or more compute nodes that hosts the data cache associated with the given one of the plurality of workflow nodes. Each of the plurality of workflow nodes is permitted to access a distributed filesystem that is visible to each of the plurality of compute nodes. The data cache stores data produced by the tasks of the given one of the plurality of workflow nodes.
Type: Application
Filed: December 29, 2020
Publication date: June 30, 2022
Inventors: Vasileios Vasileiadis, Christian Pinto, Michael Johnston, Ioannis Gkoufas, Srikumar Venugopal
-
Publication number: 20220206872
Abstract: A computer-implemented method of providing data transformation includes installing one or more data transformation plugins in a dataset made accessible for processing an end user's workload. A dataset-specific policy for the accessible dataset is ingested. A data transformation of the accessible dataset is executed by invoking one or more of the data transformation plugins to the accessible dataset based on the dataset-specific policy to generate a transformed dataset. The user's workload is deployed to provide data access for processing using the transformed dataset in accordance with a data governance policy.
Type: Application
Filed: December 30, 2020
Publication date: June 30, 2022
Inventors: Ioannis Gkoufas, Christian Pinto, Srikumar Venugopal, Stefano Braghin
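The plugin pipeline above can be sketched in a few lines. The plugin name `redact_email` and the policy shape are hypothetical examples, not from the patent; the sketch shows only the mechanism: a dataset-specific policy names which installed transformation plugins to invoke, and the workload sees only the transformed dataset.

```python
# Hypothetical sketch of policy-driven transformation: the dataset-specific
# policy lists the installed plugins to apply, in order, before the user's
# workload ever touches the data.

def transform_dataset(dataset, policy, plugins):
    """Apply every installed plugin that the policy requests, in order."""
    for name in policy["transformations"]:
        dataset = plugins[name](dataset)   # each plugin returns a new dataset
    return dataset

# Example installed plugin: mask the 'email' field of every row.
plugins = {
    "redact_email": lambda rows: [{k: ("***" if k == "email" else v)
                                   for k, v in r.items()} for r in rows],
}
```

Keeping the policy separate from the plugins means governance rules can change per dataset without touching the transformation code itself.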
-
Publication number: 20220188007
Abstract: A data processing system includes a system fabric, a system memory, a memory controller, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link, to a destination host with which the source host is non-coherent. A plurality of processing units is configured to execute a logical partition and to migrate the logical partition to the destination host via the communication link. Migration of the logical partition includes migrating, via the communication link, the dataset of the logical partition executing on the source host from the system memory of the source host to a system memory of the destination host. After migrating at least a portion of the dataset, a state of the logical partition is migrated, via the communication link, from the source host to the destination host, such that the logical partition thereafter executes on the destination host.
Type: Application
Filed: December 15, 2020
Publication date: June 16, 2022
Inventors: Steven Leonard Roberts, David A. Larson Stanton, Peter J. Heyrman, Stuart Zachary Jacobs, Christian Pinto
-
Publication number: 20220164471
Abstract: Disclosed are techniques for linking information about individual entities across multiple datasets. A target dataset with some information corresponding to at least one attribute of an entity is received. Semantic processing is performed on the target dataset to extract semantic representations of the information and corresponding attributes. These representations are used to search at least one other dataset for additional information, absent from the target dataset, that corresponds to at least one attribute of the entity, and the target dataset is augmented with that additional information. This is repeated iteratively, with each subsequent iteration including semantic representations of information found in the searches of previous iterations, until no additional information about the entity is found when searching the multiple datasets with semantic representations of the now-augmented target dataset.
Type: Application
Filed: November 23, 2020
Publication date: May 26, 2022
Inventors: Stefano Braghin, Killian Levacher, Christian Pinto, Marco Simioni
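The iterate-to-a-fixed-point structure of this abstract can be sketched with the semantic matching simplified away. Everything below is hypothetical: exact value lookup stands in for semantic search, and the loop stops when a full pass over the datasets discovers nothing new about the entity.

```python
# Hypothetical sketch of iterative entity augmentation: search the other
# datasets with what is currently known, fold newly found attributes into
# the target record, and repeat until a pass yields no new information.
# (Exact key lookup replaces real semantic matching.)

def augment_entity(target, datasets):
    """target: dict of known attributes for one entity.
    datasets: list of dicts keyed by attribute value, each mapping to
    extra attributes for that entity."""
    changed = True
    while changed:                        # iterate until a fixed point
        changed = False
        for ds in datasets:
            for value in list(target.values()):
                for attr, extra in ds.get(value, {}).items():
                    if attr not in target:
                        target[attr] = extra
                        changed = True    # new info may enable new searches
    return target
```

Note the transitive effect: an attribute found in one dataset (a phone number, say) can be the key that unlocks further attributes in another dataset on the next pass, which is exactly why the abstract's iteration matters.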
-
Patent number: 11093358
Abstract: Embodiments for managing distributed computing systems are provided. Information associated with operation of a computing node within a distributed computing system is collected. A reliability score for the computing node is calculated based on the collected information. The calculating of the reliability score is performed utilizing the computing node. A remedial action associated with the operation of the computing node is caused to be performed based on the calculated reliability score.
Type: Grant
Filed: October 14, 2019
Date of Patent: August 17, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Srikumar Venugopal, Christian Pinto
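One way to picture the score-then-act flow is the sketch below. The signals, weights, threshold, and action name are all hypothetical illustrations (the patent does not specify how the score is computed); only the shape matches the abstract: combine collected operational information into a score, then trigger a remedial action when the score is poor.

```python
# Hypothetical node reliability scoring: collected signals are combined
# into a 0..1 score (weights are illustrative), and a low score triggers
# a remedial action.

def reliability_score(uptime_ratio, error_rate, heartbeat_misses):
    """Combine operational signals into a single 0..1 reliability score."""
    score = 0.5 * uptime_ratio + 0.3 * (1.0 - error_rate)
    score += 0.2 * (1.0 if heartbeat_misses == 0 else 0.0)
    return round(score, 3)

def remedial_action(score, threshold=0.7):
    """Pick an action based on the calculated reliability score."""
    return "drain_and_restart" if score < threshold else "none"
```

Per the abstract, the scoring runs on the node itself, so the cluster manager only has to act on the reported score rather than gather raw telemetry centrally.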
-
Publication number: 20210109830
Abstract: Embodiments for managing distributed computing systems are provided. Information associated with operation of a computing node within a distributed computing system is collected. A reliability score for the computing node is calculated based on the collected information. The calculating of the reliability score is performed utilizing the computing node. A remedial action associated with the operation of the computing node is caused to be performed based on the calculated reliability score.
Type: Application
Filed: October 14, 2019
Publication date: April 15, 2021
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Srikumar VENUGOPAL, Christian PINTO
-
Publication number: 20210065063
Abstract: Embodiments for processing data with multiple machine learning models are provided. Input data is received. The input data is caused to be evaluated by a first machine learning model to generate a first inference result. The first inference result is compared to at least one quality of service (QoS) parameter. Based on the comparison of the first inference result to the at least one QoS parameter, the input data is caused to be evaluated by a second machine learning model to generate a second inference result.
Type: Application
Filed: August 26, 2019
Publication date: March 4, 2021
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michele GAZZETTI, Srikumar VENUGOPAL, Christian PINTO
-
Patent number: 10893120
Abstract: Embodiments for data caching and data-aware placement for machine learning by a processor. Data may be cached in a distributed data store to one or more local compute nodes of a cluster of nodes with the cached data. A new job may be scheduled, according to cache and data locality awareness, on the one or more local compute nodes with the cached data needed for execution.
Type: Grant
Filed: September 19, 2018
Date of Patent: January 12, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Seetharami R. Seelam, Andrea Reale, Christian Pinto, Yiannis Gkoufas, Kostas Katrinis, Steven N. Eliuk
-
Publication number: 20200092392
Abstract: Embodiments for data caching and data-aware placement for machine learning by a processor. Data may be cached in a distributed data store to one or more local compute nodes of a cluster of nodes with the cached data. A new job may be scheduled, according to cache and data locality awareness, on the one or more local compute nodes with the cached data needed for execution.
Type: Application
Filed: September 19, 2018
Publication date: March 19, 2020
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Seetharami R. SEELAM, Andrea REALE, Christian PINTO, Yiannis GKOUFAS, Kostas KATRINIS, Steven N. ELIUK
-
Patent number: 10275288
Abstract: The invention concerns a processing system comprising: a compute node (20) having one or more processors and one or more memory devices storing software enabling virtual computing resources and virtual memory to be assigned to support a plurality of virtual machines (VM1); a reconfigurable circuit (301) comprising a dynamically reconfigurable portion (302) comprising one or more partitions (304) that are reconfigurable during runtime and implement at least one hardware accelerator (ACC #1 to #N) assigned to at least one of the plurality of virtual machines (VM); and a virtualization manager (306) providing an interface between the at least one hardware accelerator (ACC #1 to #N) and the compute node (202) and comprising a circuit (406) adapted to translate, for the at least one hardware accelerator, virtual memory addresses into corresponding physical memory addresses to permit communication between the one or more hardware accelerators and the plurality of virtual machines.
Type: Grant
Filed: April 29, 2016
Date of Patent: April 30, 2019
Assignee: Virtual Open Systems
Inventors: Christian Pinto, Michele Paolino, Salvatore Daniele Raho
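The translation circuit (406) in this abstract does, in essence, virtual-to-physical address resolution on behalf of the accelerators. The sketch below is a hypothetical software model of that idea, with a plain dictionary standing in for the hardware page table; the 4 KiB page size is an illustrative assumption.

```python
# Hypothetical model of the virtualization manager's translation step: an
# accelerator presents a VM-visible virtual address, and the manager
# resolves it to a physical address so accelerator and VM can share memory.

PAGE_SIZE = 4096  # illustrative page size

def translate(vaddr, page_table):
    """page_table maps virtual page number -> physical page number.
    Raises KeyError on an unmapped page (a translation fault)."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError("fault: unmapped virtual page %d" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset
```

Splitting the address into page number and offset is what lets the same table entry serve every byte of a page, which is the standard economy behind page-granular translation.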
-
Publication number: 20160321113
Abstract: The invention concerns a processing system comprising: a compute node (20) having one or more processors and one or more memory devices storing software enabling virtual computing resources and virtual memory to be assigned to support a plurality of virtual machines (VM1); a reconfigurable circuit (301) comprising a dynamically reconfigurable portion (302) comprising one or more partitions (304) that are reconfigurable during runtime and implement at least one hardware accelerator (ACC #1 to #N) assigned to at least one of the plurality of virtual machines (VM); and a virtualization manager (306) providing an interface between the at least one hardware accelerator (ACC #1 to #N) and the compute node (202) and comprising a circuit (406) adapted to translate, for the at least one hardware accelerator, virtual memory addresses into corresponding physical memory addresses to permit communication between the one or more hardware accelerators and the plurality of virtual machines.
Type: Application
Filed: April 29, 2016
Publication date: November 3, 2016
Inventors: Christian Pinto, Michele Paolino, Salvatore Daniele Raho