Patents by Inventor Evan Custodio

Evan Custodio has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190141121
    Abstract: Technologies for determining a set of edge resources to offload a workload from a client compute device based on a brokering logic provided by a service provider include a device that includes circuitry that is in communication with edge resources. The circuitry is to receive a brokering logic from a service provider, receive a request from a client compute device, wherein the request includes a function to be used to execute the request and one or more parameters associated with the client compute device, determine the one or more parameters, select, as a function of the one or more parameters and the brokering logic, a physical implementation to perform the function, wherein the physical implementation indicates a set of edge resources and a performance level for each edge resource of the set of edge resources, and perform, in response to a selection of the physical implementation, the request using the set of edge resources associated with the physical implementation.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Ned Smith, Evan Custodio, Suraj Prabhakaran, Ignacio Astilleros Diez
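An illustrative sketch of the brokering-logic selection described in publication 20190141121. The class, function names, parameter fields, and scoring rule below are assumptions made for this example, not details taken from the patent.
```python
from dataclasses import dataclass

@dataclass
class EdgeResource:
    name: str
    performance_level: float  # e.g. normalized throughput for the requested function
    available: bool = True

def brokering_logic(params: dict, implementation: list) -> float:
    # Hypothetical service-provider policy: score a candidate set of edge resources
    # against the client's parameters; any unavailable resource disqualifies the set.
    if not all(resource.available for resource in implementation):
        return float("-inf")
    total_perf = sum(resource.performance_level for resource in implementation)
    return total_perf - params.get("latency_budget_ms", 0) * 0.01

def select_implementation(params: dict, candidates: list) -> list:
    # Pick the physical implementation (set of edge resources with performance
    # levels) that the brokering logic scores highest for these parameters.
    return max(candidates, key=lambda impl: brokering_logic(params, impl))

fast = [EdgeResource("edge-gpu-1", 0.9), EdgeResource("edge-fpga-2", 0.8)]
cheap = [EdgeResource("edge-cpu-7", 0.3)]
print([r.name for r in select_implementation({"latency_budget_ms": 20}, [fast, cheap])])
```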
  • Publication number: 20190138481
    Abstract: Technologies for providing dynamic communication path modification for accelerator device kernels include an accelerator device comprising circuitry to obtain initial availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also to produce, as a function of the initial availability data, a connectivity matrix indicative of the physical communication paths and a logical communication path defined by one or more of the physical communication paths between a kernel of the present accelerator device and a target accelerator device kernel. Additionally, the circuitry is to obtain updated availability data indicative of a subsequent availability of each accelerator device kernel and update, as a function of the updated availability data, the connectivity matrix to modify the logical communication path.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Susanne M. Balle, Slawomir Putyrski, Joseph Grecco, Evan Custodio, Francesc Guim Bernat
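A minimal sketch of maintaining the connectivity matrix and logical communication path described in publication 20190138481, assuming kernels are identified by simple string names; the breadth-first search stands in for whatever path selection the accelerator device actually performs.
```python
from collections import deque

def build_connectivity(availability):
    # availability maps each accelerator device kernel to the kernels it currently
    # has a physical communication path to.
    return {kernel: set(peers) for kernel, peers in availability.items()}

def logical_path(matrix, src, dst):
    # Breadth-first search over the connectivity matrix: one logical communication
    # path composed of currently available physical paths, or None if unreachable.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in matrix.get(path[-1], ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None

# Updated availability data simply rebuilds the matrix and recomputes the path.
matrix = build_connectivity({"k0": ["k1", "k3"], "k1": ["k2"], "k2": [], "k3": []})
print(logical_path(matrix, "k0", "k2"))  # ['k0', 'k1', 'k2']
```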
  • Publication number: 20190068444
    Abstract: Technologies for providing efficient transfer of results from remote accelerator devices include a compute sled. The compute sled is to send a request to utilize an accelerator device on an accelerator sled. The request includes a data object to be processed by the accelerator device to increase the speed of execution of a workload associated with the data object. The compute sled is also to receive a modification map from the accelerator sled indicative of a modification to the data object. Further, the compute sled is to determine the modification to the data object based on the modification map and apply the modification to the data object in a memory device of the compute sled.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer, Henry Mitchel
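A rough sketch of applying the modification map from publication 20190068444. The (offset, bytes) encoding is an assumption for illustration; the point is that the compute sled patches only the changed regions of the data object in its own memory rather than receiving the whole processed object back.
```python
def apply_modification_map(data: bytearray, modification_map: list) -> None:
    # Each entry is assumed to be (offset, replacement_bytes): patch the data object
    # in place in the compute sled's memory device instead of copying the entire object.
    for offset, chunk in modification_map:
        data[offset:offset + len(chunk)] = chunk

obj = bytearray(b"aaaaaaaaaa")
apply_modification_map(obj, [(2, b"XY"), (7, b"Z")])
print(obj)  # bytearray(b'aaXYaaaZaa')
```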
  • Publication number: 20190065281
    Abstract: Technologies for auto-migration in accelerated architectures include multiple compute sleds, accelerator sleds, and storage sleds. Each of the compute sleds includes phase detection logic to receive an indication from an application presently executing on the compute sled that indicates a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled. The phase detection logic is further to monitor a plurality of hardware threads associated with the application, detect, as a function of the monitored hardware threads, whether a phase change has occurred, and migrate, in response to having detected the phase change, the hardware threads to another compute element having a lower-performance central processing unit (CPU) relative to the CPU the application is presently being executed on. Other embodiments are described herein.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Ramamurthy Krithivas, Karthik Kumar
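A simplified sketch of the phase-detection idea in publication 20190065281, assuming a phase change is inferred from per-thread CPU utilization samples; the threshold, sample format, and migration callback are hypothetical.
```python
def phase_changed(samples: list, threshold: float = 0.2) -> bool:
    # Hypothetical heuristic: once the kernel is offloaded to the FPGA, sustained low
    # CPU utilization across the application's hardware threads signals a phase change.
    return bool(samples) and sum(samples) / len(samples) < threshold

def monitor_and_migrate(thread_samples: dict, migrate) -> None:
    # thread_samples maps hardware thread IDs to recent utilization samples (0.0-1.0).
    if all(phase_changed(samples) for samples in thread_samples.values()):
        for tid in thread_samples:
            migrate(tid)  # move the thread to a lower-performance CPU

monitor_and_migrate({0: [0.05, 0.10], 1: [0.02, 0.08]},
                    migrate=lambda tid: print(f"migrating thread {tid}"))
```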
  • Publication number: 20190065260
    Abstract: Technologies for scaling provisioning of kernel instances in a system as a function of a topology of accelerated kernels include a compute device having a compute engine. The compute engine receives, from a sled, a kernel configuration request to provision a kernel on an accelerator device. The sled is to execute a workload. The kernel accelerates a task in the workload. The compute engine determines, as a function of one or more requirements of the workload, a topology of kernels to service the request. The topology maps data communication between kernels. The compute engine configures the kernel on the accelerator device according to the determined topology.
    Type: Application
    Filed: December 29, 2017
    Publication date: February 28, 2019
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Slawomir Putyrski
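A small sketch of deriving a kernel topology from workload requirements, as in publication 20190065260. Representing the topology as directed kernel-to-kernel edges, the "pipeline" requirement field, and the provisioning call are assumptions for the example.
```python
def determine_topology(requirements: dict) -> list:
    # Map workload requirements to a chain of kernels; the topology is the set of
    # directed edges describing data communication between kernel instances.
    stages = [f"kernel_{name}" for name in requirements.get("pipeline", [])]
    return list(zip(stages, stages[1:]))

def configure_kernels(topology: list, accelerator) -> None:
    # Provision the kernels on the accelerator device according to the topology.
    for src, dst in topology:
        accelerator.connect(src, dst)  # hypothetical provisioning call

print(determine_topology({"pipeline": ["decode", "filter", "encode"]}))
# [('kernel_decode', 'kernel_filter'), ('kernel_filter', 'kernel_encode')]
```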
  • Publication number: 20190065083
    Abstract: Technologies for providing efficient access to pooled accelerator devices include an accelerator sled. The accelerator sled includes an accelerator device and a controller connected to the accelerator device. The controller is to provide, to a compute sled, accelerator abstraction data. The accelerator abstraction data represents the accelerator device as one or more logical devices, each logical device having one or more memory regions accessible by the compute sled, and defines an access mode usable to access each corresponding memory region. The controller is further to receive, from the compute sled, a request to perform an operation on an identified memory region of the accelerator device with a corresponding access mode. Additionally, the controller is to convert the request from a first format to a second format that is different from the first format and is usable by the accelerator device to perform the operation.
    Type: Application
    Filed: December 29, 2017
    Publication date: February 28, 2019
    Inventors: Sujoy Sen, Susanne M. Balle, Narayan Ranganathan, Evan Custodio, Paul H. Dormitzer, Francesc Guim Bernat
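An illustrative sketch of the accelerator abstraction data and request conversion in publication 20190065083. The field names, the "mmio"/"dma" access modes, and both request formats are assumptions made for the example.
```python
from dataclasses import dataclass

@dataclass
class MemoryRegion:
    region_id: int
    size: int
    access_mode: str  # assumed modes: "mmio" or "dma"

@dataclass
class LogicalDevice:
    device_id: int
    regions: list  # MemoryRegion objects exposed to the compute sled

def convert_request(request: dict) -> dict:
    # Translate the compute sled's request (first format) into the form the
    # accelerator device consumes (second format); both formats are invented here.
    return {
        "region": request["region_id"],
        "op": request["operation"].upper(),
        "mode": request["access_mode"],
        "payload": request.get("data", b""),
    }

abstraction = LogicalDevice(0, [MemoryRegion(0, 4096, "mmio"), MemoryRegion(1, 1 << 20, "dma")])
print(convert_request({"region_id": 1, "operation": "write", "access_mode": "dma", "data": b"\x01"}))
```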
  • Publication number: 20190068464
    Abstract: Technologies for adapting a communication protocol (e.g., TCP/IP, UDP, etc.) to network communications between endpoints (e.g., accelerated kernels configured within accelerator devices) include a sled having a compute engine. The compute engine monitors telemetry data associated with one or more network communications between a given kernel and another kernel. The network communications are established via a given communication protocol. The compute engine determines, as a function of the monitored telemetry data, that a condition to change the network communications from the communication protocol to another communication protocol is triggered. The compute engine shifts the network communications to the other communication protocol.
    Type: Application
    Filed: December 29, 2017
    Publication date: February 28, 2019
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Rahul Khanna, Evan Custodio
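A toy sketch of the telemetry-driven protocol switch in publication 20190068464. The drop-rate threshold and the TCP/UDP pairing are assumptions for illustration; the abstract only requires that some condition derived from monitored telemetry triggers a change of communication protocol.
```python
def should_switch(telemetry: dict, max_drop_rate: float = 0.01) -> bool:
    # Hypothetical condition: switch when observed packet loss exceeds a budget.
    return telemetry.get("drop_rate", 0.0) > max_drop_rate

def adapt_protocol(current: str, telemetry: dict) -> str:
    # Shift the kernel-to-kernel network communications to the other protocol
    # when the trigger condition is met; otherwise keep the current protocol.
    if should_switch(telemetry):
        return "TCP" if current == "UDP" else "UDP"
    return current

print(adapt_protocol("UDP", {"drop_rate": 0.05}))  # TCP
```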
  • Publication number: 20190065290
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
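A pseudocode-style sketch of the reprovisioning flow in publication 20190065290, assuming a hypothetical accelerator handle with configure/execute primitives; real FPGA tooling exposes quite different interfaces, and the placeholder kernel simply reverses its input.
```python
class Accelerator:
    # Hypothetical stand-in for the FPGA on the accelerator sled; a real device
    # would be driven through vendor-specific configuration and DMA interfaces.
    def configure(self, bit_stream: bytes) -> None:
        self.bit_stream = bit_stream

    def execute(self, kernel_input: bytes) -> bytes:
        return kernel_input[::-1]  # placeholder for the kernel's actual output

def run_pipeline(acc, bit_stream_1, bit_stream_2, initial_input, memory):
    acc.configure(bit_stream_1)                    # establish the first kernel
    memory["staged"] = acc.execute(initial_input)  # write its output to sled memory
    acc.configure(bit_stream_2)                    # establish the second kernel
    return acc.execute(memory["staged"])           # staged output becomes its input

print(run_pipeline(Accelerator(), b"bits1", b"bits2", b"abc", {}))  # b'abc'
```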
  • Publication number: 20190052274
    Abstract: A device includes a programmable logic fabric. The programmable logic fabric includes a first area, wherein a first persona is configured to be programmed in the first area. The programmable logic fabric also includes a second area, wherein a second persona is configured to be programmed in the second area in a second persona programming time. The device is configured to be controlled by a host to switch from running the first persona to running the second persona in a time less than the second persona programming time.
    Type: Application
    Filed: September 10, 2018
    Publication date: February 14, 2019
    Inventors: David Alexander Munday, Randall Carl Bilbrey, Jr., Evan Custodio
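A conceptual, software-only sketch of the persona-switching idea shared by publication 20190052274 (and the related entries 20180076814, 20170099053, and patents 10075167 and 9825635 below): both fabric areas are programmed ahead of time, so the host-directed switch is only a selector change, taking far less than the second persona's programming time. The class, method names, and timing figure are invented for the example.
```python
import time

class Fabric:
    def __init__(self):
        self.areas = {}          # area -> programmed persona
        self.active_area = None

    def program(self, area: str, persona: str, programming_time_s: float = 0.1) -> None:
        time.sleep(programming_time_s)  # stand-in for the slow persona programming time
        self.areas[area] = persona

    def switch(self, area: str) -> str:
        # Switching between already-programmed areas avoids reprogramming entirely.
        self.active_area = area
        return self.areas[area]

fabric = Fabric()
fabric.program("area1", "persona_a")
fabric.program("area2", "persona_b")
print(fabric.switch("area2"))  # persona_b, selected without reprogramming
```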
  • Publication number: 20190042533
    Abstract: Systems, methods, and devices for enhancing the flexibility of an integrated circuit device with partially reconfigurable regions are provided. For example, a discovery interface may determine and/or communicate a suitable logical protocol interface to control data transfer between regions on the integrated circuit device. The techniques provided herein result in more flexible partial reconfiguration options to enable greater compatibility between accelerator hosts and accelerator function units.
    Type: Application
    Filed: January 4, 2018
    Publication date: February 7, 2019
    Inventor: Evan Custodio
  • Patent number: 10075167
    Abstract: A device includes a programmable logic fabric. The programmable logic fabric includes a first area, wherein a first persona is configured to be programmed in the first area. The programmable logic fabric also includes a second area, wherein a second persona is configured to be programmed in the second area in a second persona programming time. The device is configured to be controlled by a host to switch from running the first persona to running the second persona in a time less than the second persona programming time.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: September 11, 2018
    Assignee: Altera Corporation
    Inventors: David Alexander Munday, Randall Carl Bilbrey, Jr., Evan Custodio
  • Publication number: 20180150298
    Abstract: Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks each micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Rahul Khanna, Evan Custodio
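A small sketch of the availability-request/assignment exchange in publication 20180150298. The message shapes and the greedy first-acceptance assignment are assumptions; the abstract only requires assigning tasks as a function of the availability data returned by the micro-orchestrators.
```python
def assign_tasks(metadata: dict, micro_orchestrators: dict) -> dict:
    # metadata: {"tasks": [{"id": ..., "acceleration_type": ..., "depends_on": [...]}]}
    # micro_orchestrators maps a sled name to a callable that, given the metadata,
    # returns the task IDs that sled's micro-orchestrator accepts for acceleration.
    assignments = {}
    for sled, availability_request in micro_orchestrators.items():
        accepted = availability_request(metadata)   # availability data from the sled
        for task_id in accepted:
            assignments.setdefault(task_id, sled)   # greedy: first acceptance wins
    return assignments

metadata = {"tasks": [{"id": "t0", "acceleration_type": "fpga", "depends_on": []},
                      {"id": "t1", "acceleration_type": "crypto", "depends_on": ["t0"]}]}
sleds = {"sled-a": lambda m: ["t0"], "sled-b": lambda m: ["t0", "t1"]}
print(assign_tasks(metadata, sleds))  # {'t0': 'sled-a', 't1': 'sled-b'}
```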
  • Publication number: 20180150391
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
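A minimal sketch of the memory-controller translation step in publication 20180150391, assuming a page-granular map from logical to physical addresses; the page size, the device-per-GiB ownership rule, and the string result are invented for illustration.
```python
PAGE = 4096

def route_access(logical_addr: int, page_map: dict, devices: dict) -> str:
    # Translate the logical address used by the accelerator device into a physical
    # address, then route the access to the memory device owning that address.
    physical = page_map[logical_addr // PAGE] * PAGE + (logical_addr % PAGE)
    return f"{devices[physical // (1 << 30)]}:{physical:#x}"

page_map = {0: 256, 1: 257}   # logical page -> physical page
devices = {0: "dimm0"}        # physical GiB index -> memory device
print(route_access(0x1010, page_map, devices))  # dimm0:0x101010
```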
  • Publication number: 20180150299
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
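A toy sketch of the job-division step in publication 20180150299, assuming each accelerator device's configuration exposes a relative capacity figure and the job can be split by input ranges; the proportional-split policy is an assumption, not the patent's method.
```python
def divide_job(job_size: int, accelerator_configs: dict) -> dict:
    # Split the job's input range across accelerator devices in proportion to an
    # assumed per-device capacity taken from each device's configuration.
    total = sum(accelerator_configs.values())
    tasks, start = {}, 0
    for device, capacity in accelerator_configs.items():
        share = int(round(job_size * capacity / total))
        tasks[device] = range(start, min(start + share, job_size))
        start += share
    return tasks

print(divide_job(1000, {"fpga0": 2.0, "fpga1": 1.0, "gpu0": 1.0}))
# {'fpga0': range(0, 500), 'fpga1': range(500, 750), 'gpu0': range(750, 1000)}
```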
  • Publication number: 20180150334
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution.
    Type: Application
    Filed: September 29, 2017
    Publication date: May 31, 2018
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
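An illustrative sketch of the kernel-database lookup in publication 20180150334, assuming the database is a simple mapping from kernel identifier to the accelerator sleds already configured with that kernel; picking the first match is an assumption.
```python
def assign_accelerated_task(task: dict, kernel_db: dict) -> str:
    # kernel_db maps a kernel identifier to the accelerator sleds whose accelerator
    # devices are already configured with that kernel.
    sleds = kernel_db.get(task["kernel"])
    if not sleds:
        raise LookupError(f"no accelerator sled configured with kernel {task['kernel']!r}")
    return sleds[0]  # assumption: assign to the first matching sled

kernel_db = {"jpeg-decode": ["accel-sled-3", "accel-sled-7"]}
print(assign_accelerated_task({"kernel": "jpeg-decode"}, kernel_db))  # accel-sled-3
```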
  • Publication number: 20180150330
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
  • Publication number: 20180081840
    Abstract: A programmable integrated circuit that can support partial reconfiguration is provided. The programmable integrated circuit may include multiple processing nodes that serve as accelerator blocks for an associated host processor that is communicating with the integrated circuit. The processing nodes may be connected in a hybrid shared-pipelined topology. Each pipeline stage in the hybrid architecture may include a bus switch and at least two shared processing nodes connected to the output of the bus switch. The bus switch may be configured to route an incoming packet to a selected one of the two processing nodes in that pipeline stage, or may route the incoming packet only to the active node if the other node is undergoing partial reconfiguration. Configured in this way, the hybrid topology supports partial reconfiguration of the processing nodes without disrupting or limiting the operating frequency of the overall network.
    Type: Application
    Filed: September 16, 2016
    Publication date: March 22, 2018
    Inventor: Evan Custodio
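A behavioral, software-only sketch of the per-stage bus switch in publication 20180081840: each stage holds two shared processing nodes, and the switch either balances packets between them or routes only to the active node while its partner undergoes partial reconfiguration. The round-robin policy and class names are assumptions for the example.
```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    reconfiguring: bool = False

@dataclass
class BusSwitch:
    nodes: tuple      # the two shared processing nodes behind this pipeline stage
    _next: int = 0

    def route(self, packet: bytes) -> str:
        active = [n for n in self.nodes if not n.reconfiguring]
        if not active:
            raise RuntimeError("both nodes in this stage are under partial reconfiguration")
        node = active[self._next % len(active)]  # round-robin among active nodes
        self._next += 1
        return f"{node.name} <- {packet!r}"

stage = BusSwitch((Node("pn0"), Node("pn1", reconfiguring=True)))
print(stage.route(b"pkt"))  # pn0 <- b'pkt' (pn1 is skipped during reconfiguration)
```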
  • Publication number: 20180076814
    Abstract: A device includes a programmable logic fabric. The programmable logic fabric includes a first area, wherein a first persona is configured to be programmed in the first area. The programmable logic fabric also includes a second area, wherein a second persona is configured to be programmed in the second area in a second persona programming time. The device is configured to be controlled by a host to switch from running the first persona to running the second persona in a time less than the second persona programming time.
    Type: Application
    Filed: November 20, 2017
    Publication date: March 15, 2018
    Inventors: David Alexander Munday, Randall Carl Bilbrey, Evan Custodio
  • Patent number: 9825635
    Abstract: A device includes a programmable logic fabric. The programmable logic fabric includes a first area, wherein a first persona is configured to be programmed in the first area. The programmable logic fabric also includes a second area, wherein a second persona is configured to be programmed in the second area in a second persona programming time. The device is configured to be controlled by a host to switch from running the first persona to running the second persona in a time less than the second persona programming time.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: November 21, 2017
    Assignee: Altera Corporation
    Inventors: David Alexander Munday, Randall Carl Bilbrey, Jr., Evan Custodio
  • Publication number: 20170099053
    Abstract: A device includes a programmable logic fabric. The programmable logic fabric includes a first area, wherein a first persona is configured to be programmed in the first area. The programmable logic fabric also includes a second area, wherein a second persona is configured to be programmed in the second area in a second persona programming time. The device is configured to be controlled by a host to switch from running the first persona to running the second persona in a time less than the second persona programming time.
    Type: Application
    Filed: October 27, 2016
    Publication date: April 6, 2017
    Inventors: David Alexander Munday, Randall Carl Bilbrey, Jr., Evan Custodio