Patents by Inventor Paul Dormitzer

Paul Dormitzer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11579788
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address. (A sketch of this translation-and-routing flow appears after this listing.)
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: February 14, 2023
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
  • Patent number: 11531635
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine, as a function of the obtained availability data, whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel, and to establish that logical communication path in response to a determination to do so. (A sketch of this path-setup decision appears after this listing.)
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: December 20, 2022
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Publication number: 20210334138
    Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job requested to be accelerated, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the multiple accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator. (A sketch of this predict-and-register loop appears after this listing.)
    Type: Application
    Filed: July 1, 2021
    Publication date: October 28, 2021
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Slawomir Putyrski, Rahul Khanna, Paul Dormitzer
  • Patent number: 11137922
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed. (A sketch of this lookup-and-assign flow appears after this listing.)
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: October 5, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
  • Publication number: 20210141552
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 17, 2020
    Publication date: May 13, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
  • Publication number: 20210073161
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine, as a function of the obtained availability data, whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel, and to establish that logical communication path in response to a determination to do so.
    Type: Application
    Filed: November 3, 2020
    Publication date: March 11, 2021
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Patent number: 10853296
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine, as a function of the obtained availability data, whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel, and to establish that logical communication path in response to a determination to do so.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Publication number: 20200356294
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: July 30, 2020
    Publication date: November 12, 2020
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
  • Patent number: 10768842
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
  • Patent number: 10712963
    Abstract: Technologies for encrypted data access by field-programmable gate array (FPGA) user kernels include a computing device having an FPGA and an external memory device accessible by the FPGA. The FPGA includes a secure key store, a micro-encryption engine, and multiple slots for user kernels that are each identifiable with an index. A user kernel is programmed at an index and a symmetric encryption key is provisioned to the secure key store at the index. The micro-encryption engine may read encrypted data from the external memory device, decrypt the encrypted data with the key associated with the index of the user kernel, and forward plain-text data to the user kernel. The micro-encryption engine may also receive plain-text data from the user kernel, encrypt the plain-text data with the key, and write the encrypted data to the external memory device. Other embodiments are described and claimed. (A sketch of this keyed read/write path appears after this listing.)
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Rahul Khanna, Susanne M. Balle, Francesc Guim Bernat, Sujoy Sen, Paul Dormitzer
  • Patent number: 10554391
    Abstract: Technologies for allocating data storage capacity on a data storage sled include a plurality of data storage devices communicatively coupled to a plurality of network switches through a plurality of physical network connections, and a data storage controller connected to the plurality of data storage devices. The data storage controller is to determine a target storage resource allocation to be used by one or more applications to be executed by one or more sleds in a data center, determine data storage capacity available for each of a plurality of different data storage types on the data storage sled, wherein each data storage type is associated with a different level of data redundancy, determine an amount of data storage capacity for each data storage type to be allocated to satisfy the target storage resource allocation, and adjust the amount of data storage capacity allocated to each data storage type. (A sketch of this allocation step appears after this listing.)
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 4, 2020
    Assignee: Intel Corporation
    Inventors: Steven Miller, Paul Dormitzer
  • Publication number: 20200004712
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine, as a function of the obtained availability data, whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel, and to establish that logical communication path in response to a determination to do so.
    Type: Application
    Filed: December 28, 2018
    Publication date: January 2, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Patent number: 10334334
    Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array. (A sketch of this data/metadata placement appears after this listing.)
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: June 25, 2019
    Assignee: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Publication number: 20190065401
    Abstract: Technologies for providing efficient access to the memory of an accelerator device include an accelerator sled. The accelerator sled includes an accelerator device including a memory. The accelerator sled also includes a network interface controller that includes a memory access logic unit. The memory access logic unit is to determine a memory address region usable by a remote compute device to access the memory of the accelerator device, receive, from the remote compute device, a memory access request to remotely access the memory of the accelerator device associated with the memory address region, and perform, in response to the memory access request, a direct memory access operation on the memory. Other embodiments are also described and claimed. (A sketch of this remote access flow appears after this listing.)
    Type: Application
    Filed: December 29, 2017
    Publication date: February 28, 2019
    Inventor: Paul Dormitzer
  • Publication number: 20190068444
    Abstract: Technologies for providing efficient transfer of results from remote accelerator devices include a compute sled. The compute sled is to send a request to utilize an accelerator device on an accelerator sled. The request includes a data object to be processed by the accelerator device to increase the speed of execution of a workload associated with the data object. The compute sled is also to receive a modification map from the accelerator sled indicative of a modification to the data object. Further, the compute sled is to determine the modification to the data object based on the modification map and apply the modification to the data object in a memory device of the compute sled. (A sketch of this patching step appears after this listing.)
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer, Henry Mitchel
  • Publication number: 20190065253
    Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job requested to be accelerated, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the multiple accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Slawomir Putyrski, Rahul Khanna, Paul Dormitzer
  • Publication number: 20190034102
    Abstract: Technologies for allocating data storage capacity on a data storage sled include a plurality of data storage devices communicatively coupled to a plurality of network switches through a plurality of physical network connections, and a data storage controller connected to the plurality of data storage devices. The data storage controller is to determine a target storage resource allocation to be used by one or more applications to be executed by one or more sleds in a data center, determine data storage capacity available for each of a plurality of different data storage types on the data storage sled, wherein each data storage type is associated with a different level of data redundancy, determine an amount of data storage capacity for each data storage type to be allocated to satisfy the target storage resource allocation, and adjust the amount of data storage capacity allocated to each data storage type.
    Type: Application
    Filed: December 28, 2017
    Publication date: January 31, 2019
    Inventors: Steven Miller, Paul Dormitzer
  • Patent number: 10091904
    Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The dual-mode optical network interface circuitry can have a bandwidth equal to or greater than that of the storage devices.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: October 2, 2018
    Assignee: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Patent number: 10034407
    Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises mounting flanges that enable robotic insertion into and removal from a rack, and storage device mounting slots that enable robotic insertion and removal of storage devices. The storage devices are coupled to an optical fabric through storage resource controllers and a dual-mode optical network interface.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 24, 2018
    Assignee: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Publication number: 20180150391
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
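
The shared-memory entries above (patents 11579788 and 10768842, publications 20200356294 and 20180150391) describe a sled-level memory controller that resolves a logical address through a logical-to-physical map and routes the request to the memory device that owns the resulting physical address. The minimal Python sketch below illustrates that flow; the SledMemoryController and Region names, the dict-based map, and the bytearray-backed devices are illustrative assumptions, not structures taken from the patents.

```python
# Minimal sketch of the logical-to-physical translation and routing flow
# described in patents 11579788 and 10768842. Class and field names are
# illustrative assumptions, not taken from the patents themselves.
from dataclasses import dataclass


@dataclass
class Region:
    device_id: int      # which memory device holds the region
    physical_base: int  # physical base address on that device


class SledMemoryController:
    """Routes accelerator memory requests using a logical->physical map."""

    def __init__(self, address_map, memory_devices):
        self.address_map = address_map        # {logical_base: Region}
        self.memory_devices = memory_devices  # {device_id: bytearray}

    def access(self, logical_base, offset, write_bytes=None):
        # 1. Look up the physical address associated with the logical region.
        region = self.address_map[logical_base]
        device = self.memory_devices[region.device_id]
        physical_addr = region.physical_base + offset
        # 2. Route the request to the memory device that owns that address.
        if write_bytes is None:
            return bytes(device[physical_addr:physical_addr + 8])
        device[physical_addr:physical_addr + len(write_bytes)] = write_bytes
        return None


# Example: two memory devices, one shared logical region on each.
devices = {0: bytearray(1024), 1: bytearray(1024)}
amap = {0x1000: Region(device_id=0, physical_base=0x00),
        0x2000: Region(device_id=1, physical_base=0x80)}
ctrl = SledMemoryController(amap, devices)
ctrl.access(0x2000, offset=4, write_bytes=b"\xde\xad")
print(ctrl.access(0x2000, offset=4))
```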
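
Patent 11531635 and the related publications (20210073161, 20200004712, 10853296) describe circuitry that decides, from availability data, whether to back a logical communication path between accelerator kernels with a physical path. The sketch below illustrates one such decision loop; the ChannelManager class, the load threshold, and the first-path selection policy are assumptions, since the patent only requires that the decision be a function of the availability data.

```python
# Minimal sketch of the availability-driven path setup described in patent
# 11531635. The data model and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class KernelAvailability:
    kernel_id: str
    load: float                                          # 0.0 (idle) .. 1.0 (saturated)
    physical_paths: list = field(default_factory=list)   # e.g. ["pcie0", "fabric1"]


class ChannelManager:
    def __init__(self, local_kernel_id):
        self.local_kernel_id = local_kernel_id
        self.logical_paths = {}      # remote kernel_id -> chosen physical path

    def should_connect(self, avail: KernelAvailability) -> bool:
        # Assumed policy: connect only if the remote kernel has spare
        # capacity and at least one physical path to it exists.
        return avail.load < 0.8 and bool(avail.physical_paths)

    def establish(self, avail: KernelAvailability):
        if self.should_connect(avail):
            # Pick the first reported physical path to back the logical path.
            self.logical_paths[avail.kernel_id] = avail.physical_paths[0]
        return self.logical_paths.get(avail.kernel_id)


mgr = ChannelManager("fpga0.kernelA")
remote = KernelAvailability("fpga1.kernelB", load=0.3, physical_paths=["fabric1"])
print(mgr.establish(remote))   # -> "fabric1"
```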
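
Publications 20210334138 and 20190065253 describe predicting the next accelerated job, choosing a bit-stream for it from a library, and pre-registering that bit-stream on a suitable accelerator. The sketch below uses a simple frequency-count predictor and a least-loaded placement rule; both policies are assumptions standing in for the unspecified compute-engine logic.

```python
# Minimal sketch of the predict-and-preregister loop from publications
# 20210334138 and 20190065253. The frequency-count predictor and the
# least-loaded placement rule are assumed placeholders.
from collections import Counter


class BitstreamScheduler:
    def __init__(self, bitstream_library, accelerators):
        self.library = bitstream_library   # job type -> bit-stream name
        self.accelerators = accelerators   # accelerator name -> set of registered bit-streams
        self.history = Counter()           # observed job types

    def observe(self, job_type):
        self.history[job_type] += 1

    def predict_next_job(self):
        # Assumed predictor: the most frequently requested job type so far.
        return self.history.most_common(1)[0][0] if self.history else None

    def preconfigure(self):
        job = self.predict_next_job()
        if job is None:
            return None
        bitstream = self.library[job]
        # Already registered on some accelerator? Nothing to do.
        if any(bitstream in regs for regs in self.accelerators.values()):
            return None
        # Otherwise pick the least loaded accelerator and register it there.
        target = min(self.accelerators, key=lambda a: len(self.accelerators[a]))
        self.accelerators[target].add(bitstream)
        return target, bitstream


sched = BitstreamScheduler({"crypto": "aes.bit", "compress": "lz4.bit"},
                           {"accel0": set(), "accel1": {"lz4.bit"}})
sched.observe("crypto")
sched.observe("crypto")
print(sched.preconfigure())   # -> ("accel0", "aes.bit")
```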
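
Patent 11137922 and publication 20210141552 describe looking up, in a database of kernels and associated accelerator sleds, a sled whose accelerator device is already configured with the requested kernel, then assigning the task to that sled. The sketch below models that database as an in-memory dict and uses a first-match assignment policy; both are assumptions.

```python
# Minimal sketch of the kernel-to-sled lookup and task assignment described
# in patent 11137922. The dict standing in for the "database indicative of
# kernels and associated accelerator sleds" is an assumption.


class AfaasDispatcher:
    def __init__(self, kernel_db):
        # kernel_db: kernel name -> list of sled ids whose accelerator
        # devices are already configured with that kernel
        self.kernel_db = kernel_db
        self.assignments = []

    def request_acceleration(self, task_id, kernel):
        sleds = self.kernel_db.get(kernel)
        if not sleds:
            raise LookupError(f"no accelerator sled is configured with {kernel!r}")
        sled = sleds[0]                      # assumed policy: first match
        self.assignments.append((task_id, sled))
        return sled


dispatcher = AfaasDispatcher({"matmul.bit": ["sled-3", "sled-7"]})
print(dispatcher.request_acceleration("task-42", "matmul.bit"))   # -> "sled-3"
```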
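
Patent 10712963 describes a per-slot key store and a micro-encryption engine that transparently encrypts and decrypts data moving between user kernels and external memory. The sketch below stands in for that hardware path with the third-party cryptography package's Fernet cipher and a dict-backed external memory; both substitutions are assumptions, not the patent's mechanism.

```python
# Minimal sketch of the per-slot key store and micro-encryption engine from
# patent 10712963. Fernet (from the third-party `cryptography` package) is a
# stand-in for the FPGA's symmetric cipher; the dict-backed "external memory"
# is an assumption.
from cryptography.fernet import Fernet


class MicroEncryptionEngine:
    def __init__(self):
        self.key_store = {}        # slot index -> provisioned symmetric key
        self.external_memory = {}  # address -> ciphertext

    def provision_key(self, slot_index, key):
        self.key_store[slot_index] = key

    def write(self, slot_index, address, plaintext):
        # Encrypt with the key bound to the requesting kernel's slot index,
        # then store the ciphertext in external memory.
        self.external_memory[address] = Fernet(self.key_store[slot_index]).encrypt(plaintext)

    def read(self, slot_index, address):
        # Decrypt on the way back so the user kernel only ever sees plaintext.
        return Fernet(self.key_store[slot_index]).decrypt(self.external_memory[address])


engine = MicroEncryptionEngine()
engine.provision_key(slot_index=2, key=Fernet.generate_key())
engine.write(2, address=0x100, plaintext=b"model weights")
print(engine.read(2, address=0x100))   # -> b"model weights"
```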
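
Patent 10554391 and publication 20190034102 describe splitting a target storage allocation across storage types that carry different levels of data redundancy. The sketch below uses a greedy fill that favors the least redundant type first; the redundancy factors and the fill order are assumed policies, since the patent leaves the exact allocation strategy open.

```python
# Minimal sketch of the allocation step from patent 10554391: split a target
# capacity across storage types with different redundancy levels. The greedy
# fill order and redundancy factors are assumptions.


def allocate_capacity(target_gb, storage_types):
    """storage_types: list of dicts with 'name', 'free_gb', and 'redundancy'
    (raw bytes consumed per byte of usable capacity, e.g. 2.0 for mirroring)."""
    allocation = {}
    remaining = target_gb
    # Assumed policy: fill the least redundant (cheapest) types first.
    for st in sorted(storage_types, key=lambda s: s["redundancy"]):
        usable = st["free_gb"] / st["redundancy"]
        take = min(remaining, usable)
        if take > 0:
            allocation[st["name"]] = round(take, 1)
            remaining -= take
        if remaining <= 0:
            break
    return allocation, remaining


alloc, shortfall = allocate_capacity(
    target_gb=300,
    storage_types=[{"name": "mirrored", "free_gb": 400, "redundancy": 2.0},
                   {"name": "striped", "free_gb": 200, "redundancy": 1.0}],
)
print(alloc, shortfall)   # -> {'striped': 200, 'mirrored': 100.0} 0
```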
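
Patents 10334334 and 10091904 describe a storage sled whose processing circuits store payload data on the storage device array and the associated metadata on non-volatile memory in the memory array. The sketch below models that placement split with plain dicts and a hash-based device choice; all of those concrete structures are assumptions.

```python
# Minimal sketch of the data/metadata placement described in patent 10334334:
# payload data goes to the storage device array, while metadata about that
# data is kept on non-volatile memory in the memory array. Both backing
# stores are modeled as plain dicts, which is purely an assumption.
import hashlib
import time


class StorageSled:
    def __init__(self, num_devices=4):
        self.storage_devices = [dict() for _ in range(num_devices)]  # key -> data
        self.nv_memory = {}                                          # key -> metadata

    def put(self, key, data: bytes):
        device = hash(key) % len(self.storage_devices)   # assumed placement rule
        self.storage_devices[device][key] = data
        self.nv_memory[key] = {"device": device,
                               "length": len(data),
                               "sha256": hashlib.sha256(data).hexdigest(),
                               "written_at": time.time()}

    def get(self, key):
        meta = self.nv_memory[key]                        # metadata lookup first
        return self.storage_devices[meta["device"]][key], meta


sled = StorageSled()
sled.put("object-1", b"payload bytes")
print(sled.get("object-1")[1]["length"])   # -> 13
```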
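
Publication 20190065401 describes a network interface controller on an accelerator sled that advertises a memory address region and services remote requests against it with direct memory access. The sketch below emulates that path in software; the region-registration API, the bounds check, and the read/write request shape are assumptions.

```python
# Minimal sketch of the NIC-side memory access logic from publication
# 20190065401: expose an address region of the accelerator's memory and
# service remote read/write requests against it.


class NicMemoryAccessLogic:
    def __init__(self, accelerator_memory: bytearray):
        self.memory = accelerator_memory
        self.region = None                    # (base, length) visible remotely

    def expose_region(self, base, length):
        self.region = (base, length)
        return self.region                    # advertised to the remote compute device

    def handle_request(self, addr, length, payload=None):
        base, size = self.region
        if not (base <= addr and addr + length <= base + size):
            raise ValueError("request outside the exposed address region")
        if payload is None:                   # remote read -> copy out of memory
            return bytes(self.memory[addr:addr + length])
        self.memory[addr:addr + len(payload)] = payload   # remote write -> copy in
        return None


nic = NicMemoryAccessLogic(bytearray(256))
nic.expose_region(base=64, length=128)
nic.handle_request(addr=64, length=4, payload=b"\x01\x02\x03\x04")
print(nic.handle_request(addr=64, length=4))   # -> b'\x01\x02\x03\x04'
```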
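
Publication 20190068444 describes returning only a modification map, rather than the whole processed data object, so the compute sled can patch its local copy in place. The sketch below represents the map as offset-to-bytes entries; the publication does not fix a wire format, so that representation is an assumption.

```python
# Minimal sketch of the modification-map exchange from publication 20190068444:
# the accelerator sled sends back only the changed byte ranges, and the compute
# sled patches its local copy of the data object.


def apply_modification_map(data_object: bytearray, modification_map: dict) -> bytearray:
    """Apply each (offset -> replacement bytes) entry to the local copy."""
    for offset, new_bytes in sorted(modification_map.items()):
        data_object[offset:offset + len(new_bytes)] = new_bytes
    return data_object


local_copy = bytearray(b"0000000000")
# Map received from the accelerator sled: bytes 2-4 and 7-8 were modified.
mod_map = {2: b"ABC", 7: b"ZZ"}
print(apply_modification_map(local_copy, mod_map))   # -> bytearray(b'00ABC00ZZ0')
```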