Patents by Inventor Slawomir PUTYRSKI

Slawomir PUTYRSKI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11451455
    Abstract: Technologies for latency based service level agreement (SLA) management in remote direct memory access (RDMA) networks include multiple compute devices in communication via a network switch. A compute device determines a service level objective (SLO) indicative of a guaranteed maximum latency for a percentage of RDMA requests of an RDMA session. The compute device receives latency data indicative of latency of an RDMA request from a host device. The compute device determines a priority associated with the RDMA request as a function of the SLO and the latency data. The compute device schedules the RDMA request based on the priority. The network switch may allocate queue resources to the RDMA request based on the priority, reclaim the queue resources after the RDMA request is scheduled, and then return the queue resources to a free pool. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: September 20, 2022
    Assignee: Intel Corporation
    Inventors: Mrittika Ganguli, Arvind Srinivasan, Slawomir Putyrski, Donald E. Wood
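    As a concrete illustration of the priority step, here is a minimal Python sketch, not the patented implementation: the SessionSLO class, its fields, and the three-level priority scale are hypothetical names invented for the example.
    ```python
    from dataclasses import dataclass, field


    @dataclass
    class SessionSLO:
        """Hypothetical SLO for one RDMA session (names are illustrative)."""
        target_latency_us: float   # guaranteed maximum latency
        percentile: float          # e.g. 0.99: share of requests that must meet it
        samples: list = field(default_factory=list)  # observed latencies (us)

        def observe(self, latency_us: float) -> None:
            self.samples.append(latency_us)

        def measured_percentile(self) -> float:
            """Latency at the SLO percentile over the samples seen so far."""
            ordered = sorted(self.samples)
            idx = min(int(self.percentile * len(ordered)), len(ordered) - 1)
            return ordered[idx]


    def priority_for_request(slo: SessionSLO) -> int:
        """Map SLO headroom to a small priority value (0 = schedule first).

        Sessions whose measured percentile latency approaches the guaranteed
        maximum get higher priority, so the switch can favor them when
        allocating queue resources.
        """
        headroom = slo.target_latency_us - slo.measured_percentile()
        if headroom <= 0:
            return 0        # SLO already violated: schedule first
        if headroom < 0.1 * slo.target_latency_us:
            return 1        # close to violation
        return 2            # comfortable margin: best effort


    slo = SessionSLO(target_latency_us=50.0, percentile=0.99)
    for sample in (12.0, 18.0, 49.5, 22.0):
        slo.observe(sample)
    print(priority_for_request(slo))  # 1: the p99 latency is near the 50 us bound
    ```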
  • Patent number: 11444866
    Abstract: Techniques for managing static and dynamic partitions in software-defined infrastructures (SDI) are described. An SDI manager component may include one or more processor circuits to access one or more resources. The SDI manager component may include a partition manager to create one or more partitions using the one or more resources, the one or more partitions each including a plurality of nodes of a similar resource type. The SDI manager may generate an update to a pre-composed partition table, stored within a non-transitory computer-readable storage medium, including the created one or more partitions, and receive a request from an orchestrator for a node. The SDI manager may select one of the created partitions based upon the pre-composed partition table, and identify the selected partition to the orchestrator. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: September 13, 2022
    Assignee: Intel Corporation
    Inventors: Daniel Rivas Barragan, Francesc Guim Bernat, Susanne M. Balle, John Chun Kwok Leung, Suraj Prabhakaran, Murugasamy K. Nachimuthu, Slawomir Putyrski
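    A minimal sketch of the pre-composed partition flow described above, with illustrative names (Partition, PartitionManager, and select_for are not from the patent):
    ```python
    from dataclasses import dataclass


    @dataclass
    class Partition:
        partition_id: str
        resource_type: str      # all nodes in a partition share a resource type
        node_ids: list[str]
        allocated: bool = False


    class PartitionManager:
        """Keeps a pre-composed partition table and serves orchestrator requests."""

        def __init__(self) -> None:
            self.table: dict[str, Partition] = {}

        def create_partition(self, partition_id, resource_type, node_ids):
            # Update the pre-composed table with the newly created partition.
            self.table[partition_id] = Partition(partition_id, resource_type, node_ids)

        def select_for(self, resource_type: str, min_nodes: int) -> Partition | None:
            """Pick a free pre-composed partition matching the orchestrator request."""
            for part in self.table.values():
                if (not part.allocated
                        and part.resource_type == resource_type
                        and len(part.node_ids) >= min_nodes):
                    part.allocated = True
                    return part
            return None


    mgr = PartitionManager()
    mgr.create_partition("p0", "fpga", ["n1", "n2", "n3"])
    mgr.create_partition("p1", "storage", ["n4", "n5"])
    print(mgr.select_for("fpga", min_nodes=2).partition_id)  # p0
    ```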
  • Patent number: 11429297
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: August 30, 2022
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
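    The divide-schedule-execute flow above can be sketched briefly. This is an illustrative model only; the proportional split by a single capacity number is an assumption standing in for the patent's job analysis:
    ```python
    from concurrent.futures import ThreadPoolExecutor


    def divide_job(job_items: list, accel_capacities: list[int]) -> list[list]:
        """Split job items into one task per accelerator, weighted by capacity."""
        total = sum(accel_capacities)
        tasks, start = [], 0
        for i, cap in enumerate(accel_capacities):
            is_last = i == len(accel_capacities) - 1
            # The last slice absorbs any rounding remainder.
            end = len(job_items) if is_last else start + round(len(job_items) * cap / total)
            tasks.append(job_items[start:end])
            start = end
        return tasks


    def run_on_accelerator(task: list) -> list:
        """Stand-in for offloading one task; here it just squares each item."""
        return [x * x for x in task]


    job = list(range(10))
    tasks = divide_job(job, accel_capacities=[3, 1])    # two devices, 3:1 split
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        parts = list(pool.map(run_on_accelerator, tasks))
    output = [y for part in parts for y in part]
    print(output)   # squares of 0..9, reassembled from both "accelerators"
    ```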
  • Publication number: 20220222176
    Abstract: Examples described herein relate to circuitry to utilize a proportional, integral, derivative neural network (PIDNN) controller to adjust one or more parameters allocated to a first group of one or more workloads based on one or more target parameters for a second group of one or more workloads. In some examples, the second group of one or more workloads is at the same, a lower, or a higher priority level than the first group of one or more workloads.
    Type: Application
    Filed: March 31, 2022
    Publication date: July 14, 2022
    Inventors: Anna Drewek-Ossowicka, Kamil Tomasz Andrzejewski, Rameshkumar G. Illikkal, Andrew J. Herdrich, Slawomir Putyrski, Shruthi Venugopal
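    As a sketch of the feedback idea, the following uses a plain PID loop standing in for the PIDNN controller named in the abstract; the cache-way resource, the latency model, and all constants are synthetic:
    ```python
    class PIDController:
        def __init__(self, kp: float, ki: float, kd: float) -> None:
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, target: float, measured: float) -> float:
            """Return a control adjustment from the target/measurement error."""
            error = target - measured
            self.integral += error
            derivative = error - self.prev_error
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative


    pid = PIDController(kp=0.5, ki=0.05, kd=0.1)
    low_prio_cache_ways = 8           # resource allocated to the best-effort group

    for step in range(5):
        # Synthetic measurement: the high-priority group's latency falls as the
        # best-effort group's cache allocation shrinks.
        hp_latency_ms = 2.0 + 0.25 * low_prio_cache_ways
        adjustment = pid.update(target=3.0, measured=hp_latency_ms)
        # Negative adjustment (latency above target) takes ways away.
        low_prio_cache_ways = max(1, min(8, low_prio_cache_ways + round(adjustment)))
        print(step, hp_latency_ms, low_prio_cache_ways)
    ```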
  • Publication number: 20220179575
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Publication number: 20220138025
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Application
    Filed: September 10, 2021
    Publication date: May 5, 2022
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
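    The chained-kernel reprovisioning above lends itself to a short sketch. Kernels are modeled as plain Python functions and the device memory as a list; none of these names come from the application:
    ```python
    class AcceleratorDevice:
        def __init__(self) -> None:
            self.memory: list | None = None   # output buffer shared across kernels
            self.kernel = None

        def configure(self, bit_stream) -> None:
            """Reprovision the device: the bit stream establishes the active kernel."""
            self.kernel = bit_stream

        def execute(self, input_data=None) -> None:
            # Use explicit input if given, otherwise the previous kernel's output
            # already resident in device memory, avoiding a round trip to the host.
            data = input_data if input_data is not None else self.memory
            self.memory = self.kernel(data)


    def first_kernel(data):
        return [x + 1 for x in data]      # stage 1: increment

    def second_kernel(data):
        return [x * 10 for x in data]     # stage 2: scale


    dev = AcceleratorDevice()
    dev.configure(first_kernel)
    dev.execute([1, 2, 3])        # output [2, 3, 4] stays in device memory
    dev.configure(second_kernel)  # reprovision without flushing memory
    dev.execute()                 # consumes the resident output as input
    print(dev.memory)             # [20, 30, 40]
    ```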
  • Publication number: 20210365199
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Application
    Filed: April 2, 2021
    Publication date: November 25, 2021
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
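    A minimal sketch of the request-matching flow above: match the request parameter against accelerator capabilities, forward the workload, and return the work product. The accelerator table and operation names are illustrative:
    ```python
    ACCELERATORS = {
        "fpga-0": {"ops": {"compress"}, "run": lambda data: data[:len(data) // 2]},
        "gpu-0": {"ops": {"matmul"}, "run": lambda data: [x * 2.0 for x in data]},
    }


    def process_request(op: str, workload):
        """Route a workload to an accelerator capable of the requested operation."""
        for name, accel in ACCELERATORS.items():
            if op in accel["ops"]:                      # capability check
                work_product = accel["run"](workload)   # transmit and execute
                return name, work_product
        raise LookupError(f"no accelerator supports {op!r}")


    name, product = process_request("matmul", [1.0, 2.0, 3.0])
    print(name, product)   # gpu-0 [2.0, 4.0, 6.0]
    ```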
  • Publication number: 20210334138
    Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job requested to be accelerated, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the plurality of accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator.
    Type: Application
    Filed: July 1, 2021
    Publication date: October 28, 2021
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Slawomir Putyrski, Rahul Khanna, Paul Dormitzer
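    As an illustration of the prediction step, the sketch below uses a simple first-order (bigram) frequency model as a stand-in for whatever predictor the design uses; all names are hypothetical:
    ```python
    from collections import Counter, defaultdict


    class BitStreamPredictor:
        def __init__(self) -> None:
            self.transitions = defaultdict(Counter)  # job -> Counter of next jobs
            self.registered = defaultdict(set)       # accelerator -> bit streams

        def observe(self, job: str, next_job: str) -> None:
            self.transitions[job][next_job] += 1

        def predict_next(self, current_job: str) -> str | None:
            counts = self.transitions.get(current_job)
            return counts.most_common(1)[0][0] if counts else None

        def preconfigure(self, current_job, job_to_bitstream, accelerators):
            """Register the predicted job's bit stream ahead of the request."""
            nxt = self.predict_next(current_job)
            if nxt is None:
                return None
            bs = job_to_bitstream[nxt]
            if any(bs in streams for streams in self.registered.values()):
                return None              # already registered somewhere
            target = accelerators[0]     # stand-in for capability matching
            self.registered[target].add(bs)
            return target, bs


    pred = BitStreamPredictor()
    pred.observe("decode", "inference")
    pred.observe("decode", "inference")
    pred.observe("decode", "resize")
    print(pred.preconfigure("decode", {"inference": "bs-infer"}, ["accel-0"]))
    # ('accel-0', 'bs-infer'): the likely next job's bit stream is staged early
    ```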
  • Publication number: 20210318823
    Abstract: Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks each micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
    Type: Application
    Filed: March 26, 2021
    Publication date: October 14, 2021
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Rahul Khanna, Evan Custodio
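    The availability handshake above can be sketched as two small functions; the metadata layout and sled structures are assumptions for illustration:
    ```python
    def availability_request(metadata, micro_orchestrators):
        """Collect, per sled, the set of task ids its micro-orchestrator accepts."""
        replies = {}
        for sled, accepted_types in micro_orchestrators.items():
            replies[sled] = {
                t["id"] for t in metadata["tasks"] if t["accel_type"] in accepted_types
            }
        return replies


    def assign_tasks(metadata, availability):
        """Give each task to the first sled whose micro-orchestrator accepted it."""
        assignment = {}
        for task in metadata["tasks"]:
            for sled, accepted in availability.items():
                if task["id"] in accepted:
                    assignment[task["id"]] = sled
                    break
        return assignment


    job_metadata = {
        "tasks": [
            {"id": "t0", "accel_type": "fpga"},
            {"id": "t1", "accel_type": "gpu", "depends_on": "t0"},
        ]
    }
    sleds = {"sled-a": {"fpga"}, "sled-b": {"gpu"}}
    avail = availability_request(job_metadata, sleds)
    print(assign_tasks(job_metadata, avail))   # {'t0': 'sled-a', 't1': 'sled-b'}
    ```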
  • Patent number: 11137922
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: October 5, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
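    A minimal sketch of the kernel-to-sled lookup above, with an invented tie-breaking rule (least-loaded sled); the database layout and names are illustrative:
    ```python
    KERNEL_DB = {
        "crypto-aes": ["accel-sled-1", "accel-sled-3"],
        "video-transcode": ["accel-sled-2"],
    }


    def assign_task(kernel: str, sled_load: dict[str, int]) -> str:
        """Assign the task to the least-loaded sled configured with the kernel."""
        candidates = KERNEL_DB.get(kernel, [])
        if not candidates:
            raise LookupError(f"no sled is configured with kernel {kernel!r}")
        return min(candidates, key=lambda sled: sled_load.get(sled, 0))


    load = {"accel-sled-1": 4, "accel-sled-2": 1, "accel-sled-3": 2}
    print(assign_task("crypto-aes", load))   # accel-sled-3, the lighter candidate
    ```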
  • Patent number: 11128555
    Abstract: Techniques for migration for composite nodes in software-defined infrastructures (SDI) are described. An SDI system may include an SDI manager component, including one or more processor circuits, configured to access one or more remote resources. The SDI manager component may include a partition manager configured to receive a request to create a composite node from an orchestrator component, the request including at least one preferred compute sled type and at least one alternative compute sled type. The SDI manager may create a composite node using a first compute sled matching the at least one alternative compute sled type. The SDI manager may determine, based upon a migration table stored on a non-transitory computer-readable storage medium, that a second compute sled matching the at least one preferred compute sled type is available. The SDI manager may perform a migration from the first compute sled to the second compute sled. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: September 21, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Daniel Rivas Barragan, John Chun Kwok Leung, Mark S. Myers, Suraj Prabhakaran, Murugasamy K. Nachimuthu, Slawomir Putyrski
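    A minimal sketch of the preferred/alternative sled flow above; the migration table layout and the free-sled accounting are assumptions, and the sketch omits releasing the vacated alternative sled:
    ```python
    class SDIManager:
        def __init__(self, free_sleds: dict[str, int]) -> None:
            self.free = free_sleds        # sled type -> available count
            self.migration_table = {}     # node -> preferred sled type still wanted

        def create_composite_node(self, node, preferred, alternative):
            """Compose on the preferred sled type if free, else the alternative."""
            for sled_type in (preferred, alternative):
                if self.free.get(sled_type, 0) > 0:
                    self.free[sled_type] -= 1
                    if sled_type != preferred:
                        self.migration_table[node] = preferred
                    return sled_type
            raise RuntimeError("no matching sled available")

        def try_migrations(self):
            """Move nodes to their preferred sled type when one becomes free."""
            for node, preferred in list(self.migration_table.items()):
                if self.free.get(preferred, 0) > 0:
                    self.free[preferred] -= 1
                    del self.migration_table[node]
                    yield node, preferred


    mgr = SDIManager({"high-mem": 0, "standard": 2})
    print(mgr.create_composite_node("node-0", "high-mem", "standard"))  # standard
    mgr.free["high-mem"] = 1           # a preferred sled is released
    print(list(mgr.try_migrations()))  # [('node-0', 'high-mem')]
    ```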
  • Patent number: 11119835
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: September 14, 2021
    Assignee: Intel Corporation
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
  • Patent number: 11115497
    Abstract: Technologies for providing advanced resource management in a disaggregated environment include a compute device. The compute device includes circuitry to obtain a workload to be executed by a set of resources in a disaggregated system, query a sled in the disaggregated system to identify an estimated time to complete execution of a portion of the workload to be accelerated using a kernel, and assign, in response to a determination that the estimated time to complete execution of the portion of the workload satisfies a target quality of service associated with the workload, the portion of the workload to the sled for acceleration.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 7, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Slawomir Putyrski, Susanne M. Balle, Thomas Willhalm, Karthik Kumar
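    The QoS-gated assignment above reduces to a short loop; the estimate model (queue wait plus compute time) and all structures are assumptions for illustration:
    ```python
    def query_estimate(sled: dict, kernel: str, work_units: int) -> float:
        """Stand-in for querying a sled: estimate = queue wait + compute time."""
        return sled["queue_ms"] + work_units / sled["kernel_rate"][kernel]


    def assign_with_qos(sleds, kernel, work_units, target_ms):
        """Assign the accelerated portion to the first sled meeting the target QoS."""
        for name, sled in sleds.items():
            est = query_estimate(sled, kernel, work_units)
            if est <= target_ms:       # estimate satisfies the target QoS
                return name, est
        return None                    # fall back (e.g., run unaccelerated)


    sleds = {
        "sled-a": {"queue_ms": 40.0, "kernel_rate": {"fft": 2.0}},   # units/ms
        "sled-b": {"queue_ms": 5.0, "kernel_rate": {"fft": 1.0}},
    }
    print(assign_with_qos(sleds, "fft", work_units=100, target_ms=120.0))
    # ('sled-a', 90.0): 40 ms queued + 50 ms compute meets the 120 ms target
    ```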
  • Publication number: 20210271403
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: May 14, 2021
    Publication date: September 2, 2021
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Patent number: 11029870
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: June 8, 2021
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Publication number: 20210141731
    Abstract: Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher level cache or memory. An application or device can indicate a time availability for data to be available in a higher level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in a data path from the remote memory device to the memory tier local to the higher level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetch of content. In some examples, proactive prefetch can occur for content that is expected to be needed but has not been explicitly requested for prefetch.
    Type: Application
    Filed: December 17, 2020
    Publication date: May 13, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Slawomir Putyrski, Susanne M. Balle
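    As a sketch of the deadline-driven resource check, the following sums per-hop transfer times against the application's time availability; the additive latency model and hop records are deliberate simplifications, not the described mechanism itself:
    ```python
    def can_prefetch_in_time(size_mb, deadline_ms, path):
        """Reserve bandwidth hop by hop; fail if the move would miss the deadline."""
        total_ms = 0.0
        for hop in path:
            free_bw = hop["bw_mb_per_ms"] - hop["reserved_mb_per_ms"]
            if free_bw <= 0:
                return False                   # no headroom at this hop
            total_ms += size_mb / free_bw      # time this hop contributes
        return total_ms <= deadline_ms


    path = [
        {"name": "remote-mem", "bw_mb_per_ms": 10.0, "reserved_mb_per_ms": 2.0},
        {"name": "tor-switch", "bw_mb_per_ms": 25.0, "reserved_mb_per_ms": 5.0},
        {"name": "nic", "bw_mb_per_ms": 12.0, "reserved_mb_per_ms": 0.0},
    ]
    # 64 MB needed in the local tier within 20 ms of the application's hint:
    print(can_prefetch_in_time(64.0, 20.0, path))  # True: 8 + 3.2 + 5.3 ms < 20 ms
    ```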
  • Publication number: 20210141552
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 17, 2020
    Publication date: May 13, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
  • Patent number: 10990309
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: April 27, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
  • Patent number: 10963176
    Abstract: Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks each micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Rahul Khanna, Evan Custodio
  • Publication number: 20210073161
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Application
    Filed: November 3, 2020
    Publication date: March 11, 2021
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
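    A minimal sketch of the logical-path decision above: pick the lowest-cost physical path from the availability data and bind a logical channel to it. The hop-count cost and all names are illustrative:
    ```python
    def establish_logical_path(local_kernel, remote_kernel, availability):
        """Bind a logical channel to the cheapest physical path, if any exists."""
        paths = availability.get(remote_kernel, [])
        if not paths:
            return None                    # remote kernel unreachable
        best = min(paths, key=lambda p: p["hop_count"])
        return {
            "from": local_kernel,
            "to": remote_kernel,
            "via": best["route"],          # the chosen physical path
        }


    availability = {
        "fft-kernel@sled-2": [
            {"route": ["pcie", "nic", "switch", "nic"], "hop_count": 4},
            {"route": ["inter-fpga-link"], "hop_count": 1},
        ],
    }
    print(establish_logical_path("filter-kernel", "fft-kernel@sled-2", availability))
    # logical channel bound to the direct inter-FPGA link
    ```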