Patents by Inventor Joe Grecco

Joe Grecco has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143410
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: January 5, 2024
    Publication date: May 2, 2024
    Applicant: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
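
As a non-authoritative illustration of the work-division idea in the abstract above (Publication 20240143410), the sketch below splits a job into per-device task lists in proportion to each accelerator's reported capacity. All names here (AcceleratorConfig, divide_job, the slots field) are invented for the sketch and do not come from the patent filing or any Intel API.

```python
# Illustrative sketch only, not the patented implementation: divide a job into
# tasks and apportion them across accelerator devices as a function of each
# device's configuration (modeled here as a simple parallel-slot count).
from dataclasses import dataclass

@dataclass
class AcceleratorConfig:
    device_id: str
    kind: str      # e.g. "fpga" or "gpu"
    slots: int     # how many tasks the device can run in parallel

def divide_job(job_items, configs):
    """Split a job into per-device task lists proportional to device capacity."""
    total_slots = sum(c.slots for c in configs)
    tasks, start = {}, 0
    for cfg in configs:
        share = round(len(job_items) * cfg.slots / total_slots)
        tasks[cfg.device_id] = list(job_items[start:start + share])
        start += share
    if start < len(job_items):                 # rounding remainder goes to the last device
        tasks[configs[-1].device_id].extend(job_items[start:])
    return tasks

if __name__ == "__main__":
    configs = [AcceleratorConfig("accel0", "fpga", slots=2),
               AcceleratorConfig("accel1", "gpu", slots=6)]
    job = list(range(16))                      # 16 work items from a remote requester
    print(divide_job(job, configs))            # accel1 gets three times accel0's share
```
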
  • Patent number: 11907557
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: February 20, 2024
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Patent number: 11861424
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: January 2, 2024
    Assignee: Intel Corporation
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
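
The reprovisioning flow summarized in the abstract above (configure with a first bit stream, write output to sled memory, reconfigure, reuse that output as input) can be mimicked in a few lines. In the sketch below a plain Python callable stands in for a bit stream and a dictionary stands in for the sled memory that persists across reconfiguration; these stand-ins are assumptions for illustration only.

```python
# Illustrative stand-in for the reprovisioning flow: the "accelerator" keeps
# its results in attached memory so a second kernel can consume them after
# the device is reconfigured, without moving data off the sled.

class SimulatedAccelerator:
    def __init__(self, shared_memory):
        self.memory = shared_memory    # sled memory, survives reconfiguration
        self.kernel = None

    def configure(self, bitstream):
        """Stand-in for loading a bit stream; here it just installs a callable."""
        self.kernel = bitstream

    def execute(self, input_key, output_key):
        self.memory[output_key] = self.kernel(self.memory[input_key])

# Two "bit streams": the first scales the data, the second accumulates it.
scale_kernel = lambda data: [2 * x for x in data]
sum_kernel = lambda data: sum(data)

memory = {"input": [1, 2, 3, 4]}
accel = SimulatedAccelerator(memory)

accel.configure(scale_kernel)
accel.execute("input", "stage1")     # first kernel writes its output to memory

accel.configure(sum_kernel)          # reprovision with the second bit stream
accel.execute("stage1", "result")    # second kernel reads the first kernel's output

print(memory["result"])              # 20
```
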
  • Publication number: 20230195346
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Application
    Filed: February 14, 2023
    Publication date: June 22, 2023
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
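
A minimal sketch of the workflow idea above, under the assumption that accelerator capabilities can be modeled as a simple capability set per device: the compute engine matches the request parameter to a capable accelerator, dispatches the workload, and hands the work product back to the application. The registry layout and names below are invented for illustration.

```python
# Illustrative sketch only: match a workload request, described by a request
# parameter, to an accelerator that can satisfy it, then return the work product.

ACCELERATOR_REGISTRY = [
    {"id": "sled1/fpga0", "supports": {"compression"}, "run": lambda d: d[:len(d) // 2]},
    {"id": "sled2/gpu0",  "supports": {"inference"},   "run": lambda d: [x * x for x in d]},
]

def process_request(operation, payload):
    """Select a capable accelerator, dispatch the workload, return the work product."""
    for device in ACCELERATOR_REGISTRY:
        if operation in device["supports"]:
            return device["id"], device["run"](payload)
    raise RuntimeError(f"no accelerator available for {operation!r}")

device_id, product = process_request("inference", [1, 2, 3])
print(device_id, product)   # sled2/gpu0 [1, 4, 9]
```
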
  • Patent number: 11579788
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: February 14, 2023
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
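
To make the logical-to-physical routing in the abstract above concrete, the sketch below models a memory controller that keeps a page-granular map from logical addresses to (memory device, physical page) pairs and routes each access to the owning device. The page size, map layout, and device names are assumptions for the sketch, not details from the patent.

```python
# Illustrative sketch only: translate a logical address via an address map and
# route the access to the memory device that backs the requested region.

PAGE_SIZE = 4096

# logical page number -> (memory device id, physical page number)
ADDRESS_MAP = {
    0: ("dimm0", 7),
    1: ("dimm1", 3),
    2: ("dimm0", 8),
}

MEMORY_DEVICES = {"dimm0": bytearray(64 * PAGE_SIZE), "dimm1": bytearray(64 * PAGE_SIZE)}

def route_access(logical_addr, data=None):
    """Translate the logical address and route the read/write to the right device."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    device_id, phys_page = ADDRESS_MAP[page]
    device = MEMORY_DEVICES[device_id]
    phys_addr = phys_page * PAGE_SIZE + offset
    if data is None:
        return device[phys_addr]
    device[phys_addr] = data

route_access(PAGE_SIZE + 16, data=0xAB)       # lands on dimm1, physical page 3
print(hex(route_access(PAGE_SIZE + 16)))      # 0xab
```
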
  • Patent number: 11429297
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: August 30, 2022
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Publication number: 20220179575
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Publication number: 20220138025
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Application
    Filed: September 10, 2021
    Publication date: May 5, 2022
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
  • Publication number: 20210365199
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Application
    Filed: April 2, 2021
    Publication date: November 25, 2021
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
  • Publication number: 20210318823
    Abstract: Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks the micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
    Type: Application
    Filed: March 26, 2021
    Publication date: October 14, 2021
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Rahul Khanna, Evan Custodio
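
A rough sketch of the offloaded-scheduling flow above, assuming the job metadata can be reduced to a task list with an acceleration type and a dependency per task: each micro-orchestrator reports which tasks it accepts, and the compute device assigns tasks as a function of that availability data. The data shapes and names below are invented for illustration.

```python
# Illustrative sketch only: generate job metadata, collect availability data
# from micro-orchestrators, and assign tasks based on what each sled accepted.

job_metadata = [
    {"task": "decode",  "accel": "fpga", "depends_on": None},
    {"task": "infer",   "accel": "gpu",  "depends_on": "decode"},
    {"task": "encrypt", "accel": "fpga", "depends_on": "infer"},
]

# Each micro-orchestrator reports which acceleration types its sled offers.
micro_orchestrators = {"sled-a": {"fpga"}, "sled-b": {"gpu"}}

def availability(sled_types, metadata):
    """A micro-orchestrator accepts the tasks whose acceleration type it offers."""
    return [m["task"] for m in metadata if m["accel"] in sled_types]

def assign_tasks(metadata, orchestrators):
    offers = {sled: availability(types, metadata) for sled, types in orchestrators.items()}
    assignment = {}
    for m in metadata:
        for sled, accepted in offers.items():
            if m["task"] in accepted and m["task"] not in assignment:
                assignment[m["task"]] = sled
    return assignment

print(assign_tasks(job_metadata, micro_orchestrators))
# {'decode': 'sled-a', 'infer': 'sled-b', 'encrypt': 'sled-a'}
```
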
  • Patent number: 11137922
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: October 5, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
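
The accelerated-functions-as-a-service lookup above can be pictured as a small table keyed by kernel name. The sketch below is only an illustration under that assumption; the database contents and function names are made up for the example.

```python
# Illustrative sketch only: look up which accelerator sleds already have the
# requested kernel configured, then assign the task to one of them.

KERNEL_DATABASE = {
    "aes-gcm":  ["accel-sled-3", "accel-sled-7"],
    "jpeg-dec": ["accel-sled-1"],
}

def assign_accelerated_task(kernel_name):
    """Return a sled that already has the requested kernel loaded, if any."""
    sleds = KERNEL_DATABASE.get(kernel_name)
    if not sleds:
        raise LookupError(f"no accelerator sled is configured with {kernel_name!r}")
    return sleds[0]   # a fuller scheduler might also weigh load or locality

print(assign_accelerated_task("aes-gcm"))   # accel-sled-3
```
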
  • Patent number: 11119835
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: September 14, 2021
    Assignee: Intel Corporation
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
  • Publication number: 20210271403
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: May 14, 2021
    Publication date: September 2, 2021
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Patent number: 11029870
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: June 8, 2021
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Publication number: 20210141552
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 17, 2020
    Publication date: May 13, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
  • Patent number: 10990309
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: April 27, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
  • Patent number: 10963176
    Abstract: Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks the micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Rahul Khanna, Evan Custodio
  • Publication number: 20200356294
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: July 30, 2020
    Publication date: November 12, 2020
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
  • Patent number: 10768842
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
  • Publication number: 20190068444
    Abstract: Technologies for providing efficient transfer of results from remote accelerator devices include a compute sled. The compute sled is to send a request to utilize an accelerator device on an accelerator sled. The request includes a data object to be processed by the accelerator device to increase the speed of execution of a workload associated with the data object. The compute sled is also to receive a modification map from the accelerator sled indicative of a modification to the data object. Further, the compute sled is to determine the modification to the data object based on the modification map and apply the modification to the data object in a memory device of the compute sled.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer, Henry Mitchel
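
The modification-map scheme in the last abstract avoids shipping the whole processed data object back to the compute sled. The sketch below assumes the map is simply a dictionary of changed byte offsets and their new values; that format and the helper names are illustrative, not taken from the filing.

```python
# Illustrative sketch only: build a map of the bytes the accelerator changed,
# then apply just those deltas to the compute sled's local copy of the object.

def build_modification_map(original, processed):
    """Record only the byte positions the accelerator actually changed."""
    return {i: processed[i] for i in range(len(original)) if original[i] != processed[i]}

def apply_modification_map(local_copy, mod_map):
    """Apply the deltas to the compute sled's in-memory copy of the data object."""
    data = bytearray(local_copy)
    for offset, value in mod_map.items():
        data[offset] = value
    return bytes(data)

original = b"hello accelerator"
processed = b"HELLO accelerator"          # the accelerator upper-cased the first word
mod_map = build_modification_map(original, processed)
print(mod_map)                                      # only offsets 0-4 changed
print(apply_modification_map(original, mod_map))    # b'HELLO accelerator'
```
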