Patents by Inventor Evan Custodio

Evan Custodio has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200409772
    Abstract: Technologies for providing inter-kernel communication application programming interfaces (API) include an orchestrator device comprising circuitry to receive a request to allocate one or more accelerator resources to a given workload. The circuitry is also configured to identify one or more kernel bit streams in the accelerator resources used to perform the workload. The circuitry is configured to determine, from the identified one or more kernel bit streams, an inter-kernel communication topology and configure the identified one or more kernel bit streams according to the inter-kernel communication topology.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Evan Custodio
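    The entry above describes an orchestrator that derives an inter-kernel communication topology from kernel bit streams and then configures those bit streams accordingly. The sketch below is purely illustrative and not taken from the patent: the KernelBitstream metadata, the derive_topology heuristic (matching producers to consumers), and the printed "configure link" step are all assumptions standing in for unspecified hardware configuration logic.
```python
# Illustrative model: derive an inter-kernel topology from bit-stream metadata.
from dataclasses import dataclass, field

@dataclass
class KernelBitstream:
    name: str
    consumes: list = field(default_factory=list)   # data types this kernel reads
    produces: list = field(default_factory=list)   # data types this kernel writes

def derive_topology(kernels):
    """Connect each producer to every kernel that consumes its output type."""
    edges = []
    for src in kernels:
        for dst in kernels:
            if src is not dst and set(src.produces) & set(dst.consumes):
                edges.append((src.name, dst.name))
    return edges

def configure(topology):
    # In hardware this would program routing tables in each kernel's
    # communication logic; here we only report the chosen links.
    for src, dst in topology:
        print(f"configure link: {src} -> {dst}")

if __name__ == "__main__":
    kernels = [
        KernelBitstream("decode", produces=["frames"]),
        KernelBitstream("infer", consumes=["frames"], produces=["labels"]),
        KernelBitstream("report", consumes=["labels"]),
    ]
    configure(derive_topology(kernels))
```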
  • Publication number: 20200409877
    Abstract: Technologies for facilitating remote memory requests in accelerator devices are disclosed. The accelerator device includes circuitry to receive, from a kernel of the present accelerator device, a request through an application programming interface exposed to a high level software language in which the kernel of the present accelerator device is implemented, to establish a logical communication path between the kernel of the present accelerator device and a target accelerator device kernel, based on one or more physical communication paths. The communication protocol supported by the accelerator device may allow kernels operating on the accelerator device to send memory requests for memory locations at remote devices, with the communication protocol performing all of the operations necessary to carry out the memory request.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Paul H. Dormitzer, Narayan Ranganathan
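    As a rough software analogy for the kernel-facing API described in the entry above, the sketch below models a kernel establishing a logical path to a remote accelerator kernel and issuing memory reads and writes through it. The class names, method signatures, and the in-memory dictionary standing in for remote memory are all hypothetical; the patent does not specify this interface.
```python
# Hypothetical sketch of a kernel-facing remote-memory API over a logical path.
class LogicalPath:
    """Abstracts one or more physical paths to a remote accelerator kernel."""
    def __init__(self, target, physical_paths):
        self.target = target
        self.physical_paths = physical_paths   # e.g. ["pcie", "ethernet"]
        self._remote_memory = {}               # stands in for the remote device

    def read(self, addr, length):
        print(f"[{self.physical_paths[0]}] READ  0x{addr:x} from {self.target}")
        return self._remote_memory.get(addr, b"\x00" * length)

    def write(self, addr, data):
        print(f"[{self.physical_paths[0]}] WRITE 0x{addr:x} to {self.target}")
        self._remote_memory[addr] = data

class RemoteMemoryAPI:
    """What a kernel written in a high-level language might call."""
    def __init__(self):
        self._paths = {}

    def establish(self, target, physical_paths):
        self._paths[target] = LogicalPath(target, physical_paths)

    def read(self, target, addr, length):
        return self._paths[target].read(addr, length)

    def write(self, target, addr, data):
        self._paths[target].write(addr, data)

api = RemoteMemoryAPI()
api.establish("fpga-b/kernel0", ["pcie", "ethernet"])
api.write("fpga-b/kernel0", 0x1000, b"\x01\x02")
assert api.read("fpga-b/kernel0", 0x1000, 2) == b"\x01\x02"
```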
  • Publication number: 20200409748
    Abstract: Technologies for managing accelerator resources in a computing environment include an orchestrator having circuitry. According to one embodiment, the circuitry is to monitor resource usage of an accelerator kernel configured on a source accelerator device. The circuitry is to determine whether the resource usage exceeds a threshold specified in one or more policies. Upon a determination that the resource usage exceeds the threshold, the circuitry is to identify a target accelerator device to which to migrate the accelerator kernel. The circuitry migrates the accelerator kernel from the source accelerator device to the target accelerator device.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Slawomir Putyrski, Sujoy Sen, Evan Custodio, Paul H. Dormitzer
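    The monitoring-and-migration loop described in the entry above can be summarized as a small policy check. The toy model below is an assumption-laden sketch, not the patented mechanism: the policy format, the single utilization metric, and the "least-loaded target" selection rule are illustrative choices.
```python
# Toy model of threshold-driven kernel migration between accelerator devices.
POLICY = {"utilization_threshold": 0.85}   # assumed policy format

def monitor_and_migrate(kernel, source, candidates, read_utilization):
    usage = read_utilization(source, kernel)
    if usage <= POLICY["utilization_threshold"]:
        return source                                  # nothing to do
    # pick the least-loaded candidate as the migration target
    target = min(candidates, key=lambda dev: read_utilization(dev, kernel))
    print(f"migrating {kernel} from {source} to {target} (usage={usage:.0%})")
    return target

if __name__ == "__main__":
    load = {"fpga0": 0.92, "fpga1": 0.30, "fpga2": 0.55}
    placement = monitor_and_migrate(
        "crypto-kernel", "fpga0", ["fpga1", "fpga2"],
        read_utilization=lambda dev, _kernel: load[dev])
    print("kernel now on", placement)
```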
  • Patent number: 10877817
    Abstract: Technologies for providing inter-kernel communication application programming interfaces (API) include an orchestrator device comprising circuitry to receive a request to allocate one or more accelerator resources to a given workload. The circuitry is also configured to identify one or more kernel bit streams in the accelerator resources used to perform the workload. The circuitry is configured to determine, from the identified one or more kernel bit streams, an inter-kernel communication topology and configure the identified one or more kernel bit streams according to the inter-kernel communication topology.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Evan Custodio
  • Patent number: 10853296
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Publication number: 20200356294
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: July 30, 2020
    Publication date: November 12, 2020
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
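    The address translation step described in the entry above amounts to a lookup in a logical-to-physical map followed by routing to the owning memory device. The sketch below is a minimal illustration under assumed map layout and device names; it is not the sled's actual data structure.
```python
# Minimal model of sled-level logical-to-physical address routing.
ADDRESS_MAP = [
    # (logical_base, size, memory_device, physical_base) -- layout assumed
    (0x0000_0000, 0x4000_0000, "dimm0", 0x0000_0000),
    (0x4000_0000, 0x4000_0000, "dimm1", 0x0000_0000),
]

def route(logical_addr):
    for base, size, device, phys_base in ADDRESS_MAP:
        if base <= logical_addr < base + size:
            return device, phys_base + (logical_addr - base)
    raise ValueError(f"unmapped logical address 0x{logical_addr:x}")

device, phys = route(0x4000_1000)
print(f"request routed to {device} at physical address 0x{phys:x}")
```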
  • Publication number: 20200348854
    Abstract: Technologies for compressing communications for accelerator devices are disclosed. An accelerator device may include a communication abstraction logic unit to manage communication with one or more remote accelerator devices. The communication abstraction logic unit may receive communications to and from a kernel on the accelerator device. The communication abstraction logic unit may compress and decompress the communications without instruction from the corresponding kernel. The communication abstraction logic unit may choose when and how to compress communications based on telemetry of the accelerator device and the remote accelerator device.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat
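    A telemetry-driven compression decision like the one described in the entry above could look roughly like the sketch below. The thresholds, the zlib codec, and the one-byte framing tag are all assumptions chosen for illustration; the patent does not prescribe them.
```python
# Sketch of a compression decision that is transparent to the kernel.
import zlib

def should_compress(payload, local_cpu_load, link_utilization):
    # Assumed heuristic: compress only when the link is busy, the device has
    # spare cycles, and the payload is large enough to be worth the effort.
    return link_utilization > 0.7 and local_cpu_load < 0.5 and len(payload) > 1024

def send(payload, local_cpu_load, link_utilization):
    if should_compress(payload, local_cpu_load, link_utilization):
        return b"Z" + zlib.compress(payload)     # tagged as compressed
    return b"R" + payload                        # raw

def receive(frame):
    return zlib.decompress(frame[1:]) if frame[:1] == b"Z" else frame[1:]

message = b"x" * 4096
assert receive(send(message, local_cpu_load=0.2, link_utilization=0.9)) == message
```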
  • Publication number: 20200341824
    Abstract: Technologies for providing inter-kernel communication abstraction to support scale-up and scale-out include an accelerator device. The accelerator device includes circuitry to receive, from a kernel of the present accelerator device, a request through an application programming interface exposed to a high level software language in which the kernel of the present accelerator device is implemented, to establish a logical communication path between the kernel of the present accelerator device and a target accelerator device kernel, based on one or more physical communication paths. Additionally, the circuitry is to establish, in response to the request, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel and communicate data between the kernel of the present accelerator device and the other accelerator device kernel with a unified communication protocol that manages differences between the physical communication paths.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 29, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Narayan Ranganathan, Paul H. Dormitzer
  • Patent number: 10789189
    Abstract: Technologies for providing inter-kernel flow control for accelerator device kernels include an accelerator device. The accelerator device includes circuitry to determine availability data indicative of an availability of one or more accelerator device kernels in a system. The availability data includes credit data indicative of a number of data packets permitted to be sent from an output port associated with a kernel of the present accelerator device to an input port associated with another accelerator device kernel. The circuitry is also to obtain a data packet to be processed by a target accelerator device kernel in the system. Additionally, the circuitry is to determine, as a function of the credit data, an output port to send the data packet through to provide the data packet to the target accelerator device kernel. Additionally, the circuitry is to send the data packet through the determined output port.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: September 29, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio
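    Credit-based flow control of the kind described in the entry above can be modeled as choosing among output ports by their remaining credits and decrementing a credit per packet sent. The port names, the "most credits wins" selection rule, and the back-pressure behavior below are illustrative assumptions, not the claimed circuitry.
```python
# Illustrative credit-based selection of an output port.
class OutputPort:
    def __init__(self, name, credits):
        self.name, self.credits = name, credits

    def send(self, packet):
        self.credits -= 1                      # one credit consumed per packet
        print(f"{self.name}: sent {len(packet)} bytes ({self.credits} credits left)")

def pick_port(ports_to_target):
    """Choose the port with the most remaining credits, if any has credit."""
    port = max(ports_to_target, key=lambda p: p.credits)
    return port if port.credits > 0 else None  # None means apply back-pressure

ports = [OutputPort("port0", credits=1), OutputPort("port1", credits=4)]
port = pick_port(ports)
if port is not None:
    port.send(b"payload")
```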
  • Patent number: 10768842
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
  • Patent number: 10686449
    Abstract: A device includes a programmable logic fabric. The programmable logic fabric includes a first area, wherein a first persona is configured to be programmed in the first area. The programmable logic fabric also includes a second area, wherein a second persona is configured to be programmed in the second area in a second persona programming time. The device is configured to be controlled by a host to switch from running the first persona to running the second persona in a time less than the second persona programming time.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: June 16, 2020
    Assignee: Altera Corporation
    Inventors: David Alexander Munday, Randall Carl Bilbrey, Jr., Evan Custodio
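    The key idea in the entry above is that switching to the second persona takes less time than programming it, because programming can complete before the switch. The toy timeline below illustrates that relationship only; the timing constants, region names, and sequential "background" programming step are fabricated for the example.
```python
# Toy timeline: persona hand-off is a short switch, not a full reprogramming.
import time

PROGRAM_TIME_S = 0.5     # assumed time to program a persona into a region
SWITCH_TIME_S = 0.001    # assumed time to re-steer traffic between regions

class Fabric:
    def __init__(self):
        self.regions = {"A": None, "B": None}
        self.active = None

    def program(self, region, persona):
        time.sleep(PROGRAM_TIME_S)           # slow: partial reconfiguration
        self.regions[region] = persona

    def switch_to(self, region):
        time.sleep(SWITCH_TIME_S)            # fast: redirect the datapath only
        self.active = self.regions[region]

fabric = Fabric()
fabric.program("A", "persona-1")
fabric.switch_to("A")
fabric.program("B", "persona-2")             # modelled sequentially here; in the
                                             # device this overlaps persona-1's run
start = time.perf_counter()
fabric.switch_to("B")
print(f"switch took {time.perf_counter() - start:.4f}s "
      f"vs {PROGRAM_TIME_S}s programming time")
```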
  • Patent number: 10678737
    Abstract: Technologies for providing dynamic communication path modification for accelerator device kernels include an accelerator device comprising circuitry to obtain initial availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also to produce, as a function of the initial availability data, a connectivity matrix indicative of the physical communication paths and a logical communication path defined by one or more of the physical communication paths between a kernel of the present accelerator device and a target accelerator device kernel. Additionally, the circuitry is to obtain updated availability data indicative of a subsequent availability of each accelerator device kernel and update, as a function of the updated availability data, the connectivity matrix to modify the logical communication path.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: June 9, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Slawomir Putyrski, Joseph Grecco, Evan Custodio, Francesc Guim Bernat
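    The connectivity-matrix bookkeeping described in the entry above can be approximated as: build an adjacency structure from availability data, derive a logical path from it, and re-derive the path when availability changes. The brute-force path search and the latency-style link costs below are assumptions made for a small, runnable example; real devices would use dedicated routing logic.
```python
# Sketch: maintain a connectivity matrix and update the logical path.
import itertools

def build_matrix(availability):
    """availability: {kernel: {neighbor: link_cost}} -> adjacency dict."""
    return {kernel: dict(links) for kernel, links in availability.items()}

def best_path(matrix, src, dst):
    # Small brute-force search over intermediate hops (fine for tiny graphs).
    best = None
    nodes = [n for n in matrix if n not in (src, dst)]
    for count in range(len(nodes) + 1):
        for middle in itertools.permutations(nodes, count):
            hops = (src, *middle, dst)
            try:
                cost = sum(matrix[a][b] for a, b in zip(hops, hops[1:]))
            except KeyError:
                continue                      # some hop is not connected
            if best is None or cost < best[0]:
                best = (cost, hops)
    return best

matrix = build_matrix({"k0": {"k1": 1, "k2": 5}, "k1": {"k2": 1}, "k2": {}})
print("initial path:", best_path(matrix, "k0", "k2"))
matrix["k1"]["k2"] = 50          # updated availability: the k1 link degrades
print("updated path:", best_path(matrix, "k0", "k2"))
```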
  • Publication number: 20200167506
    Abstract: A PCIe card includes an FPGA and a memory that is discrete from the FPGA. The memory is accessible by the FPGA and not by other devices on the card. The FPGA's core fabric is configured with a security processor that verifies a bitstream loaded through the FPGA into the memory as authentic or not authentic, to limit unauthorized access to data from a user circuit that is associated with a bitstream that is not authentic. The security processor is loaded into the FPGA when a request is made for bitstream verification and is allowed to be overwritten after the security processor processes the bitstream to determine whether it is authentic or not authentic. Allowing the security processor to be overwritten allows a high percentage of the core fabric to be used for user circuits and limits the need to keep an infrequently used static circuit in the core fabric.
    Type: Application
    Filed: September 27, 2019
    Publication date: May 28, 2020
    Applicant: Intel Corporation
    Inventors: Prakash Iyer, Eric Innis, Evan Custodio, Ting Lu
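    The on-demand verification flow in the entry above can be sketched as: load a verifier into a fabric region, check the bitstream, then release the region for user logic. The example below uses a bare SHA-256 digest comparison purely for illustration; a real design would verify cryptographic signatures, and the region and bitstream names are invented.
```python
# Hedged sketch of on-demand bitstream verification in a reclaimable region.
import hashlib

TRUSTED_DIGESTS = {"user-kernel-v1": hashlib.sha256(b"example bitstream").hexdigest()}

class Fabric:
    def __init__(self):
        self.regions = {}

    def load_verifier(self):
        self.regions["pr0"] = "security-processor"   # temporarily occupies a region

    def release_verifier(self):
        self.regions.pop("pr0", None)                # region freed for user circuits

    def verify(self, name, bitstream):
        self.load_verifier()
        authentic = hashlib.sha256(bitstream).hexdigest() == TRUSTED_DIGESTS.get(name)
        self.release_verifier()
        return authentic

fabric = Fabric()
print("authentic:", fabric.verify("user-kernel-v1", b"example bitstream"))
print("authentic:", fabric.verify("user-kernel-v1", b"tampered bitstream"))
```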
  • Patent number: 10606779
    Abstract: A programmable integrated circuit that can support partial reconfiguration is provided. The programmable integrated circuit may include multiple processing nodes that serve as accelerator blocks for an associated host processor that is communicating with the integrated circuit. The processing nodes may be connected in a hybrid shared-pipelined topology. Each pipeline stage in the hybrid architecture may include a bus switch and at least two shared processing nodes connected to the output of the bus switch. The bus switch may be configured to route an incoming packet to a selected one of the two processing nodes in that pipeline stage, or may route the incoming packet only to the active node if the other node is undergoing partial reconfiguration. Configured in this way, the hybrid topology supports partial reconfiguration of the processing nodes without disrupting or limiting the operating frequency of the overall network.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: March 31, 2020
    Assignee: Altera Corporation
    Inventor: Evan Custodio
  • Publication number: 20200073464
    Abstract: Technologies for providing adaptive power management in an accelerator sled include an accelerator sled having circuitry to determine, based on (i) a total power budget for the accelerator sled, (ii) service level agreement (SLA) data indicative of a target performance of a kernel, and (iii) profile data indicative of a performance of the kernel as a function of a power utilization of the kernel, a power utilization limit for the kernel to be executed by an accelerator device on the accelerator sled. Additionally, the circuitry is to allocate the determined power utilization limit to the kernel and execute the kernel under the allocated power utilization limit.
    Type: Application
    Filed: April 25, 2019
    Publication date: March 5, 2020
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Sujoy Sen, Evan Custodio, Paul H. Dormitzer
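    The budgeting step described in the entry above combines three inputs: a total sled budget, per-kernel SLA targets, and a power-versus-performance profile. The sketch below shows one plausible way to combine them; the profile/SLA formats, the "lowest level that meets the SLA" rule, and the proportional scale-down are assumptions, not the patented policy.
```python
# Toy power-budgeting pass for kernels on an accelerator sled.
TOTAL_BUDGET_W = 150

# Assumed profile format: power level (W) -> achievable throughput (ops/s).
PROFILES = {
    "kernelA": {20: 1_000, 40: 1_800, 60: 2_200},
    "kernelB": {20: 3_000, 40: 5_500, 60: 6_000},
}
SLA = {"kernelA": 1_500, "kernelB": 5_000}    # target throughput per kernel

def allocate(profiles, sla, budget):
    limits = {}
    for kernel, profile in profiles.items():
        # smallest power level whose profiled performance meets the SLA
        meeting = [p for p, perf in sorted(profile.items()) if perf >= sla[kernel]]
        limits[kernel] = meeting[0] if meeting else max(profile)
    total = sum(limits.values())
    if total > budget:                         # scale proportionally to fit
        limits = {k: v * budget / total for k, v in limits.items()}
    return limits

print(allocate(PROFILES, SLA, TOTAL_BUDGET_W))
```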
  • Publication number: 20200073849
    Abstract: Technologies for network interface controllers (NICs) include a computing device having a NIC coupled to a root FPGA via an I/O link. The root FPGA is further coupled to multiple worker FPGAs by a serial link with each worker FPGA. The NIC may receive a remote direct memory access (RDMA) message from a remote host and send the RDMA message to the root FPGA via the I/O link. The root FPGA determines a target FPGA based on a memory address of the RDMA message. Each FPGA is associated with a part of a unified address space. If the target FPGA is a worker FPGA, the root FPGA sends the RDMA message to the worker FPGA via the corresponding serial link, and the worker FPGA processes the RDMA message. If the root FPGA is the target, the root FPGA may process the RDMA message. Other embodiments are described and claimed.
    Type: Application
    Filed: May 3, 2019
    Publication date: March 5, 2020
    Inventors: Paul H. Dormitzer, Susanne M. Balle, Sujoy Sen, Evan Custodio
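    The routing decision described in the entry above hinges on which slice of the unified address space an RDMA message targets: the root FPGA handles its own slice and forwards the rest over per-worker serial links. The region size, device names, and contiguous layout in the sketch below are illustrative assumptions.
```python
# Simplified routing of an RDMA message by its address in a unified space.
REGION_SIZE = 0x1000_0000
# Region 0 belongs to the root FPGA; regions 1..N to workers (layout assumed).
FPGAS = ["root", "worker0", "worker1", "worker2"]

def route_rdma(message_addr):
    index = message_addr // REGION_SIZE
    if index >= len(FPGAS):
        raise ValueError("address outside the unified address space")
    target = FPGAS[index]
    local_offset = message_addr % REGION_SIZE
    if target == "root":
        return "process locally on root", local_offset
    return f"forward over serial link to {target}", local_offset

action, offset = route_rdma(0x2000_0040)
print(action, hex(offset))     # forward over serial link to worker1 0x40
```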
  • Publication number: 20200004712
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Application
    Filed: December 28, 2018
    Publication date: January 2, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Publication number: 20190281132
    Abstract: Technologies for managing telemetry and sensor data on an edge networking platform are disclosed. According to one embodiment disclosed herein, a device monitors telemetry data associated with multiple services provided in the edge networking platform. The device identifies, for each of the services and as a function of the associated telemetry data, one or more service telemetry patterns. The device generates a profile including the identified service telemetry patterns.
    Type: Application
    Filed: May 17, 2019
    Publication date: September 12, 2019
    Inventors: Ramanathan Sethuraman, Timothy Verrall, Ned M. Smith, Thomas Willhalm, Brinda Ganesh, Francesc Guim Bernat, Karthik Kumar, Evan Custodio, Suraj Prabhakaran, Ignacio Astilleros Diez, Nilesh K. Jain, Ravi Iyer, Andrew J. Herdrich, Alexander Vul, Patrick G. Kutch, Kevin Bohan, Trevor Cooper
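    The profiling step described in the entry above reduces raw per-service telemetry to recognizable patterns. The sketch below uses simple summary statistics as a stand-in for whatever pattern extraction the platform actually performs; the sample data, thresholds, and "steady/bursty" labels are invented for illustration.
```python
# Rough sketch of deriving per-service telemetry patterns into a profile.
from statistics import mean, pstdev

TELEMETRY = {   # service -> sampled utilization over time (assumed data)
    "video-analytics": [0.72, 0.80, 0.78, 0.75, 0.81],
    "cdn-cache":       [0.10, 0.12, 0.55, 0.11, 0.09],
}

def service_pattern(samples):
    average, spread = mean(samples), pstdev(samples)
    shape = "bursty" if spread > 0.1 else "steady"
    return {"mean": round(average, 2), "stdev": round(spread, 2), "shape": shape}

profile = {service: service_pattern(samples) for service, samples in TELEMETRY.items()}
print(profile)
```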
  • Publication number: 20190228166
    Abstract: Technologies for securely providing one or more remote accelerators hosted on edge resources to a client compute device include a device that includes an accelerator and one or more processors. The one or more processors are to determine whether to enable acceleration of an encrypted workload, receive, via an edge network, encrypted data from a client compute device, and transfer the encrypted data to the accelerator without exposing content of the encrypted data to the one or more processors. The accelerator is to receive, in response to a determination to enable the acceleration of the encrypted workload, an accelerator key from a secure server via a secured channel, and process, in response to a transfer of the encrypted data from the one or more processors, the encrypted data using the accelerator key.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Ned M. Smith, Brinda Ganesh, Francesc Guim Bernat, Eoin Walsh, Evan Custodio
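    The data flow in the entry above keeps the host out of the trust boundary: the host only forwards ciphertext, and only the accelerator obtains the key from a secure server. The model below illustrates that flow only; XOR stands in for real authenticated encryption, and the class names and key-provisioning details are invented.
```python
# Very rough model of host-bypassed processing of an encrypted workload.
import secrets

def xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class SecureKeyServer:
    def __init__(self):
        self._keys = {"tenant-42": secrets.token_bytes(16)}
    def fetch_key(self, tenant):              # stands in for a secured channel
        return self._keys[tenant]

class Accelerator:
    def __init__(self, key_server):
        self._key_server = key_server
    def process(self, tenant, ciphertext):
        key = self._key_server.fetch_key(tenant)
        plaintext = xor(ciphertext, key)      # decrypted only inside the accelerator
        return xor(plaintext.upper(), key)    # re-encrypt the processed result

class HostCPU:
    """Forwards encrypted data without ever seeing the key or plaintext."""
    def __init__(self, accelerator):
        self.accelerator = accelerator
    def handle(self, tenant, ciphertext):
        return self.accelerator.process(tenant, ciphertext)

server = SecureKeyServer()
client_key = server.fetch_key("tenant-42")    # client provisioned out of band
ciphertext = xor(b"edge workload", client_key)
result = HostCPU(Accelerator(server)).handle("tenant-42", ciphertext)
print(xor(result, client_key))                # b'EDGE WORKLOAD'
```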
  • Publication number: 20190227843
    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Evan Custodio, Francesc Guim Bernat, Suraj Prabhakaran, Trevor Cooper, Ned M. Smith, Kshitij Doshi, Petar Torre
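    The migration decision in the entry above can be pictured as: notice that another edge location now serves the client better, ask the local accelerators for their in-progress (transformed) workload data, and hand that data to the new location. The sketch below is a loose illustration under invented assumptions: a one-dimensional "distance" model, made-up edge names, and an export_state callback standing in for whatever transformed data the accelerators would actually return.
```python
# Sketch of an edge-to-edge migration decision and state hand-off.
def should_migrate(client_location, current_edge, edges):
    """Return a better edge location, or None if the current one is closest."""
    nearest = min(edges, key=lambda e: abs(edges[e] - client_location))
    return nearest if nearest != current_edge else None

def migrate(accelerators, target_edge):
    for accelerator in accelerators:
        state = accelerator["export_state"]()   # transformed workload data
        print(f"sending {len(state)} bytes of {accelerator['name']} state "
              f"to {target_edge}")

EDGES = {"edge-east": 10, "edge-west": 90}      # positions along a road (assumed)
accelerators = [{"name": "fpga0", "export_state": lambda: b"\x00" * 256}]
target = should_migrate(client_location=80, current_edge="edge-east", edges=EDGES)
if target:
    migrate(accelerators, target)
```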