Patents by Inventor Evan Custodio

Evan Custodio has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11137922
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: October 5, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
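
As an illustration of the mechanism described in the entry above, here is a minimal Python sketch of the kernel-to-sled lookup: a database maps kernel identifiers to the accelerator sleds configured with them, and an incoming task is assigned to a matching sled. All names (KernelDatabase, assign_task, the least-loaded tie-break) are hypothetical and are not drawn from the patent.

```python
# Hypothetical sketch of the kernel-to-sled lookup described in the abstract.
# Names and data structures are illustrative only; they are not from the patent.
from dataclasses import dataclass, field


@dataclass
class AcceleratorSled:
    sled_id: str
    configured_kernels: set[str] = field(default_factory=set)
    assigned_tasks: list[str] = field(default_factory=list)


class KernelDatabase:
    """Maps kernel identifiers to sleds whose accelerators are configured with them."""

    def __init__(self, sleds: list[AcceleratorSled]):
        self._by_kernel: dict[str, list[AcceleratorSled]] = {}
        for sled in sleds:
            for kernel in sled.configured_kernels:
                self._by_kernel.setdefault(kernel, []).append(sled)

    def sleds_for(self, kernel_id: str) -> list[AcceleratorSled]:
        return self._by_kernel.get(kernel_id, [])


def assign_task(db: KernelDatabase, task_id: str, kernel_id: str) -> AcceleratorSled | None:
    """Pick the least-loaded sled already configured with the requested kernel."""
    candidates = db.sleds_for(kernel_id)
    if not candidates:
        return None  # no sled currently holds this kernel
    target = min(candidates, key=lambda s: len(s.assigned_tasks))
    target.assigned_tasks.append(task_id)
    return target


if __name__ == "__main__":
    sleds = [
        AcceleratorSled("sled-0", {"crypto", "compress"}),
        AcceleratorSled("sled-1", {"crypto"}),
    ]
    db = KernelDatabase(sleds)
    chosen = assign_task(db, "task-42", "crypto")
    print("assigned to", chosen.sled_id if chosen else "nothing")
```
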
  • Patent number: 11119835
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: September 14, 2021
    Assignee: Intel Corporation
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
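
The reprovisioning flow in the entry above can be pictured with a small sketch: configure a first kernel, write its output to sled memory, reconfigure the device with a second bit stream, and feed the stored output back in as input. The bit streams below are stand-ins (plain Python callables), not real FPGA images, and the class names are invented for illustration.

```python
# Hypothetical sketch of sequential reprovisioning with a shared memory buffer.

class AcceleratorDevice:
    def __init__(self, memory: dict):
        self.memory = memory          # shared sled memory that survives reconfiguration
        self.kernel = None

    def configure(self, bit_stream):
        """Simulate loading a bit stream; the kernel is just a callable here."""
        self.kernel = bit_stream

    def execute(self, in_key: str, out_key: str) -> None:
        self.memory[out_key] = self.kernel(self.memory.get(in_key))


sled_memory = {"stage0_in": [3, 1, 2]}
device = AcceleratorDevice(sled_memory)

device.configure(lambda data: sorted(data))            # first "bit stream": sort
device.execute("stage0_in", "stage1_in")

device.configure(lambda data: [x * x for x in data])   # second "bit stream": square
device.execute("stage1_in", "result")

print(sled_memory["result"])  # [1, 4, 9]
```
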
  • Publication number: 20210271403
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: May 14, 2021
    Publication date: September 2, 2021
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
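
The job-division idea in the entry above (split a job into tasks sized to each accelerator's configuration, then schedule and collect the outputs) can be sketched as follows. Device "configurations" are reduced here to a single throughput weight, and all function names are hypothetical; the patent's job analysis is far richer than this.

```python
# Hypothetical sketch of dividing a job across accelerator devices by capacity.

def divide_job(job: list[int], device_weights: dict[str, int]) -> dict[str, list[int]]:
    """Split job items across devices in proportion to their configured capacity."""
    total = sum(device_weights.values())
    tasks, start = {}, 0
    for i, (dev, weight) in enumerate(device_weights.items()):
        # The last device takes the remainder so every item is covered exactly once.
        end = len(job) if i == len(device_weights) - 1 else start + round(len(job) * weight / total)
        tasks[dev] = job[start:end]
        start = end
    return tasks


def execute_tasks(tasks: dict[str, list[int]]) -> list[int]:
    """Stand-in for scheduling the tasks on the devices and merging their outputs."""
    output = []
    for dev, items in tasks.items():
        output.extend(x + 1 for x in items)  # pretend each device applies the same kernel
    return output


job = list(range(10))
tasks = divide_job(job, {"fpga0": 3, "fpga1": 1})
print(tasks)
print(execute_tasks(tasks))
```
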
  • Patent number: 11082525
    Abstract: Technologies for managing telemetry and sensor data on an edge networking platform are disclosed. According to one embodiment disclosed herein, a device monitors telemetry data associated with multiple services provided in the edge networking platform. The device identifies, for each of the services and as a function of the associated telemetry data, one or more service telemetry patterns. The device generates a profile including the identified service telemetry patterns.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: August 3, 2021
    Assignee: Intel Corporation
    Inventors: Ramanathan Sethuraman, Timothy Verrall, Ned M. Smith, Thomas Willhalm, Brinda Ganesh, Francesc Guim Bernat, Karthik Kumar, Evan Custodio, Suraj Prabhakaran, Ignacio Astilleros Diez, Nilesh K. Jain, Ravi Iyer, Andrew J. Herdrich, Alexander Vul, Patrick G. Kutch, Kevin Bohan, Trevor Cooper
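
A rough sketch of the profiling step described in the entry above: summarize each service's telemetry stream into a coarse pattern and bundle the patterns into a profile. The "pattern" here is just simple summary statistics, an assumption made for illustration; the patent does not specify pattern identification at this level.

```python
# Hypothetical sketch of building a per-service telemetry profile.
from statistics import mean, pstdev


def identify_pattern(samples: list[float]) -> dict[str, float]:
    """Summarize one service's telemetry stream into a coarse pattern."""
    return {"mean": mean(samples), "stdev": pstdev(samples), "peak": max(samples)}


def build_profile(telemetry: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Generate a profile keyed by service name from monitored telemetry."""
    return {service: identify_pattern(samples) for service, samples in telemetry.items()}


profile = build_profile({
    "video-cache": [0.2, 0.4, 0.9, 0.3],
    "iot-ingest": [0.7, 0.8, 0.75, 0.9],
})
print(profile)
```
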
  • Patent number: 11029870
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: June 8, 2021
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Publication number: 20210141552
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 17, 2020
    Publication date: May 13, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
  • Patent number: 10990309
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: April 27, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
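
The workflow in the entry above (match a request's parameters against accelerator capabilities, send the workload, return the work product) can be illustrated with a short sketch. The parameter names and the capability model below are assumptions for illustration only.

```python
# Hypothetical sketch of parameter-driven accelerator selection.

ACCELERATORS = {
    "fpga-0": {"ops": {"fft"}, "max_batch": 64},
    "gpu-0": {"ops": {"fft", "matmul"}, "max_batch": 256},
}


def select_accelerators(params: dict) -> list[str]:
    """Return the devices whose capabilities satisfy the request parameters."""
    return [
        name for name, cap in ACCELERATORS.items()
        if params["op"] in cap["ops"] and params["batch"] <= cap["max_batch"]
    ]


def process(params: dict, workload: list[float]) -> list[float]:
    devices = select_accelerators(params)
    if not devices:
        raise RuntimeError("no accelerator satisfies the request parameters")
    # Stand-in for transmitting the workload and receiving the work product.
    return [x * 2 for x in workload]


print(select_accelerators({"op": "matmul", "batch": 128}))   # ['gpu-0']
print(process({"op": "fft", "batch": 32}, [1.0, 2.0, 3.0]))
```
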
  • Patent number: 10970246
    Abstract: Technologies for network interface controllers (NICs) include a computing device having a NIC coupled to a root FPGA via an I/O link. The root FPGA is further coupled to multiple worker FPGAs by a serial link with each worker FPGA. The NIC may receive a remote direct memory access (RDMA) message from a remote host and send the RDMA message to the root FPGA via the I/O link. The root FPGA determines a target FPGA based on a memory address of the RDMA message. Each FPGA is associated with a part of a unified address space. If the target FPGA is a worker FPGA, the root FPGA sends the RDMA message to the worker FPGA via the corresponding serial link, and the worker FPGA processes the RDMA message. If the root FPGA is the target, the root FPGA may process the RDMA message. Other embodiments are described and claimed.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: April 6, 2021
    Assignee: Intel Corporation
    Inventors: Paul H. Dormitzer, Susanne M. Balle, Sujoy Sen, Evan Custodio
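
The routing decision in the entry above is driven by a unified address space in which each FPGA owns a slice: the root FPGA processes an RDMA message locally if the target address falls in its own range, and otherwise forwards it over the serial link to the owning worker. The address ranges and names in this sketch are invented for illustration.

```python
# Hypothetical sketch of address-based RDMA routing between a root FPGA and workers.

ADDRESS_MAP = [
    # (start, end, fpga) -- each FPGA owns a slice of the unified address space
    (0x0000_0000, 0x0FFF_FFFF, "root"),
    (0x1000_0000, 0x1FFF_FFFF, "worker-0"),
    (0x2000_0000, 0x2FFF_FFFF, "worker-1"),
]


def route_rdma(address: int) -> str:
    """Return which FPGA should process an RDMA message for this memory address."""
    for start, end, fpga in ADDRESS_MAP:
        if start <= address <= end:
            return fpga
    raise ValueError(f"address {address:#x} is outside the unified address space")


for addr in (0x0000_1000, 0x1234_0000, 0x2FFF_0000):
    target = route_rdma(addr)
    action = "processed locally" if target == "root" else f"forwarded to {target}"
    print(f"{addr:#010x}: {action}")
```
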
  • Patent number: 10963176
    Abstract: Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks the micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Rahul Khanna, Evan Custodio
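
The availability handshake in the entry above can be pictured as: derive per-task metadata from a job, ask each sled's micro-orchestrator which tasks it will accept, then assign tasks based on the replies. Every name and data shape below is an illustrative assumption, not the patent's interface.

```python
# Hypothetical sketch of the micro-orchestrator availability/assignment handshake.

def make_metadata(job: dict) -> list[dict]:
    """Flatten a job into per-task metadata records (id, acceleration type, dependency)."""
    return [
        {"task": t["id"], "accel": t["accel"], "depends_on": t.get("depends_on")}
        for t in job["tasks"]
    ]


def availability(sled_capabilities: set[str], metadata: list[dict]) -> set[str]:
    """A micro-orchestrator accepts tasks whose acceleration type it can provide."""
    return {m["task"] for m in metadata if m["accel"] in sled_capabilities}


def assign(metadata: list[dict], replies: dict[str, set[str]]) -> dict[str, str]:
    """Assign each task to the first sled that reported it as available."""
    assignments = {}
    for m in metadata:
        for sled, accepted in replies.items():
            if m["task"] in accepted:
                assignments[m["task"]] = sled
                break
    return assignments


job = {"tasks": [
    {"id": "t0", "accel": "fpga"},
    {"id": "t1", "accel": "gpu", "depends_on": "t0"},
]}
meta = make_metadata(job)
replies = {"sled-a": availability({"fpga"}, meta), "sled-b": availability({"gpu"}, meta)}
print(assign(meta, replies))
```
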
  • Patent number: 10949362
    Abstract: Technologies for facilitating remote memory requests in accelerator devices are disclosed. The accelerator device includes circuitry to receive, from a kernel of the present accelerator device, a request through an application programming interface exposed to a high level software language in which the kernel of the present accelerator device is implemented, to establish a logical communication path between the kernel of the present accelerator device and a target accelerator device kernel, based on one or more physical communication paths. The communication protocol supported by the accelerator device may allow kernels operating on the accelerator device to send memory requests for memory locations at remote devices, with the communication protocol performing all of the operations necessary to carry out the memory request.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio, Paul H. Dormitzer, Narayan Ranganathan
  • Publication number: 20210073161
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Application
    Filed: November 3, 2020
    Publication date: March 11, 2021
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Publication number: 20210067162
    Abstract: A device includes a programmable logic fabric. The programmable logic fabric includes a first area, wherein a first persona is configured to be programmed in the first area. The programmable logic fabric also includes a second area, wherein a second persona is configured to be programmed in the second area in a second persona programming time. The device is configured to be controlled by a host to switch from running the first persona to running the second persona in a time less than the second persona programming time.
    Type: Application
    Filed: June 12, 2020
    Publication date: March 4, 2021
    Inventors: David Alexander Munday, Randall Carl Bilbrey, Jr., Evan Custodio
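
The entry above describes a ping-pong persona scheme: while one fabric area runs the active persona, the next persona is programmed into the other area in the background, so the host-visible switch takes far less time than a full programming cycle. The sketch below simulates this with invented timings and class names; it is not the device's actual control interface.

```python
# Hypothetical sketch of switching between pre-programmed fabric personas.
import time


class Fabric:
    PROGRAM_SECONDS = 0.2   # pretend full persona programming time
    SWITCH_SECONDS = 0.001  # pretend switch time between already-programmed areas

    def __init__(self):
        self.areas = {0: None, 1: None}
        self.active = 0

    def program(self, area: int, persona: str) -> None:
        time.sleep(self.PROGRAM_SECONDS)   # stands in for loading the bit stream
        self.areas[area] = persona

    def switch_to(self, area: int) -> None:
        time.sleep(self.SWITCH_SECONDS)    # much shorter than PROGRAM_SECONDS
        self.active = area


fabric = Fabric()
fabric.program(0, "persona-A")
fabric.switch_to(0)
fabric.program(1, "persona-B")             # programmed while persona-A keeps running

start = time.perf_counter()
fabric.switch_to(1)                        # the host only sees the fast switch
print(f"switched to {fabric.areas[fabric.active]} in {time.perf_counter() - start:.4f}s")
```
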
  • Publication number: 20200409772
    Abstract: Technologies for providing inter-kernel communication application programming interfaces (API) include an orchestrator device comprising circuitry to receive a request to allocate one or more accelerator resources to a given workload. The circuitry is also configured to identify one or more kernel bit streams in the accelerator resources used to perform the workload. The circuitry is configured to determine, from the identified one or more kernel bit streams, an inter-kernel communication topology and configure the identified one or more kernel bit streams according to the inter-kernel communication topology.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Evan Custodio
  • Publication number: 20200409748
    Abstract: Technologies for managing accelerator resources in a computing environment include an orchestrator having circuitry. According to one embodiment, the circuitry is to monitor resource usage of an accelerator kernel configured on a source accelerator device. The circuitry is to determine whether the resource usage exceeds a threshold specified in one or more policies. Upon a determination that the resource usage exceeds the threshold, the circuitry is to identify a target accelerator device to which to migrate the accelerator kernel. The circuitry migrates the accelerator kernel from the source accelerator device to the target accelerator device.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Slawomir Putyrski, Sujoy Sen, Evan Custodio, Paul H. Dormitzer
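
The policy-driven migration in the entry above reduces to: monitor a kernel's resource usage on its source accelerator and, when a policy threshold is exceeded, pick a less-loaded target and migrate. The threshold value, load model, and function names below are illustrative assumptions.

```python
# Hypothetical sketch of threshold-based accelerator kernel migration.

def should_migrate(usage: float, policy: dict) -> bool:
    return usage > policy["max_utilization"]


def pick_target(loads: dict[str, float], source: str) -> str | None:
    """Choose the least-loaded accelerator other than the source, if any exists."""
    candidates = {dev: load for dev, load in loads.items() if dev != source}
    return min(candidates, key=candidates.get) if candidates else None


def migrate(kernel: str, source: str, loads: dict[str, float], policy: dict) -> str:
    """Return the accelerator the kernel should run on after applying the policy."""
    if not should_migrate(loads[source], policy):
        return source
    target = pick_target(loads, source)
    return target if target is not None else source


loads = {"accel-0": 0.95, "accel-1": 0.30, "accel-2": 0.55}
policy = {"max_utilization": 0.85}
print(migrate("kernel-x", "accel-0", loads, policy))  # expected: accel-1
```
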
  • Publication number: 20200409877
    Abstract: Technologies for facilitating remote memory requests in accelerator devices are disclosed. The accelerator device includes circuitry to receive, from a kernel of the present accelerator device, a request through an application programming interface exposed to a high level software language in which the kernel of the present accelerator device is implemented, to establish a logical communication path between the kernel of the present accelerator device and a target accelerator device kernel, based on one or more physical communication paths. The communication protocol supported by the accelerator device may allow kernels operating on the accelerator device to send memory requests for memory locations at remote devices, with the communication protocol performing all of the operations necessary to carry out the memory request.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Paul H. Dormitzer, Narayan Ranganathan
  • Patent number: 10877817
    Abstract: Technologies for providing inter-kernel communication application programming interfaces (API) include an orchestrator device comprising circuitry to receive a request to allocate one or more accelerator resources to a given workload. The circuitry is also configured to identify one or more kernel bit streams in the accelerator resources used to perform the workload. The circuitry is configured to determine, from the identified one or more kernel bit streams, an inter-kernel communication topology and configure the identified one or more kernel bit streams according to the inter-kernel communication topology.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Evan Custodio
  • Patent number: 10853296
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Publication number: 20200356294
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: July 30, 2020
    Publication date: November 12, 2020
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
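
The routing step in the entry above can be sketched as a logical-to-physical lookup: the sled's memory controller translates a logical address from an accelerator into a physical address via a map, then routes the access to the memory device owning that address. The page size, map contents, and names below are invented for illustration.

```python
# Hypothetical sketch of logical-to-physical routing in a shared-memory sled.

LOGICAL_TO_PHYSICAL = {
    # logical page -> (memory device, physical page); invented values for illustration
    0x10: ("dimm-0", 0x200),
    0x11: ("dimm-0", 0x201),
    0x20: ("dimm-1", 0x080),
}


def route_access(logical_page: int, offset: int) -> tuple[str, int]:
    """Resolve a logical address to (memory device, physical address), assuming 4 KiB pages."""
    try:
        device, physical_page = LOGICAL_TO_PHYSICAL[logical_page]
    except KeyError:
        raise ValueError(f"no mapping for logical page {logical_page:#x}") from None
    return device, (physical_page << 12) | offset


device, phys = route_access(0x20, 0x0F4)
print(f"route to {device} at {phys:#x}")
```
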
  • Publication number: 20200348854
    Abstract: Technologies for compressing communications for accelerator devices are disclosed. An accelerator device may include a communication abstraction logic unit to manage communication with one or more remote accelerator devices. The communication abstraction logic unit may receive communications to and from a kernel on the accelerator device. The communication abstraction logic unit may compress and decompress the communications without instruction from the corresponding kernel. The communication abstraction logic unit may choose when and how to compress communications based on telemetry of the accelerator device and the remote accelerator device.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat
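
The telemetry-driven decision in the entry above can be illustrated with a small sketch: the abstraction layer decides, transparently to the kernel, whether to compress a message based on link and device telemetry. The heuristic thresholds, the one-byte framing, and the use of zlib are all assumptions made here for illustration.

```python
# Hypothetical sketch of telemetry-driven, kernel-transparent compression.
import zlib


def should_compress(payload: bytes, telemetry: dict) -> bool:
    """Compress only when the link is the bottleneck and both devices have spare cycles."""
    return (
        telemetry["link_utilization"] > 0.7
        and telemetry["local_cpu"] < 0.8
        and telemetry["remote_cpu"] < 0.8
        and len(payload) > 512
    )


def send(payload: bytes, telemetry: dict) -> bytes:
    """Return the wire form of the message; a 1-byte header marks compression."""
    if should_compress(payload, telemetry):
        return b"\x01" + zlib.compress(payload)
    return b"\x00" + payload


def receive(wire: bytes) -> bytes:
    return zlib.decompress(wire[1:]) if wire[0] == 1 else wire[1:]


telemetry = {"link_utilization": 0.9, "local_cpu": 0.4, "remote_cpu": 0.5}
message = b"accelerator result " * 100
wire = send(message, telemetry)
assert receive(wire) == message
print(f"sent {len(wire)} bytes for a {len(message)}-byte message")
```
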
  • Publication number: 20200341824
    Abstract: Technologies for providing inter-kernel communication abstraction to support scale-up and scale-out include an accelerator device. The accelerator device includes circuitry to receive, from a kernel of the present accelerator device, a request through an application programming interface exposed to a high level software language in which the kernel of the present accelerator device is implemented, to establish a logical communication path between the kernel of the present accelerator device and a target accelerator device kernel, based on one or more physical communication paths. Additionally, the circuitry is to establish, in response to the request, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel and communicate data between the kernel of the present accelerator device and the other accelerator device kernel with a unified communication protocol that manages differences between the physical communication paths.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 29, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Narayan Ranganathan, Paul H. Dormitzer
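
To close, the abstraction in the final entry above can be pictured as: a kernel asks, through an API, for a logical path to another kernel; the layer selects among the available physical paths and hides their differences behind one send() call. The path names, latency figures, and selection rule below are invented for illustration and do not reflect the patent's protocol.

```python
# Hypothetical sketch of a logical communication path layered over physical paths.

PHYSICAL_PATHS = {
    # (source kernel, target kernel) -> list of (transport, latency_us)
    ("kernel-a", "kernel-b"): [("on-die", 1), ("pcie", 5)],
    ("kernel-a", "kernel-c"): [("ethernet", 50)],
}


class LogicalPath:
    def __init__(self, source: str, target: str):
        options = PHYSICAL_PATHS.get((source, target), [])
        if not options:
            raise ValueError(f"no physical path from {source} to {target}")
        # The unified protocol picks a physical path; the kernel never sees which one.
        self.transport, self.latency_us = min(options, key=lambda p: p[1])
        self.source, self.target = source, target

    def send(self, payload: bytes) -> str:
        return f"{len(payload)} bytes {self.source}->{self.target} via {self.transport}"


path = LogicalPath("kernel-a", "kernel-b")
print(path.send(b"intermediate results"))
```
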