Patents by Inventor Susanne M. Balle

Susanne M. Balle has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200348854
    Abstract: Technologies for compressing communications for accelerator devices are disclosed. An accelerator device may include a communication abstraction logic unit to manage communication with one or more remote accelerator devices. The communication abstraction logic unit may receive communication to and from a kernel on the accelerator device. The communication abstraction logic unit may compress and decompress the communication without instruction from the corresponding kernel. The communication abstraction logic unit may choose when and how to compress communications based on telemetry of the accelerator device and the remote accelerator device.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat
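
As a rough illustration of the telemetry-driven compression decision in publication 20200348854 above, the Python sketch below compresses a message only when hypothetical telemetry fields (`link_utilization`, `compute_headroom`) suggest the link, not compute, is the bottleneck. The field names, thresholds, and use of zlib are illustrative assumptions, not the patented design.

```python
# Minimal sketch: compress accelerator-to-accelerator traffic only when telemetry
# says the link is congested and there are spare cycles to spend on compression.
import zlib

def maybe_compress(payload, telemetry):
    """Return (data, compressed_flag); the decision is invisible to the kernel."""
    link_busy = telemetry.get("link_utilization", 0.0) > 0.7   # link is the bottleneck
    has_cycles = telemetry.get("compute_headroom", 0.0) > 0.2  # spare cycles available
    if link_busy and has_cycles:
        return zlib.compress(payload, 1), True                 # fast, low-latency level
    return payload, False                                      # send as-is

def receive(data, compressed):
    """Decompression is likewise transparent to the receiving kernel."""
    return zlib.decompress(data) if compressed else data

if __name__ == "__main__":
    msg = b"kernel-to-kernel message " * 64
    wire, flag = maybe_compress(msg, {"link_utilization": 0.9, "compute_headroom": 0.5})
    assert receive(wire, flag) == msg
    print(f"sent {len(wire)} bytes (compressed={flag}) instead of {len(msg)}")
```
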
  • Patent number: 10824358
    Abstract: Technologies for dynamically managing the reliability of disaggregated resources in a managed node include a resource manager server. The resource manager server includes a communication circuit to receive resource data from a set of disaggregated resources that indicates the reliability of each disaggregated resource of the set of disaggregated resources and a node request to compose a managed node. The resource manager server further includes a compute engine to determine node parameters from the node request indicative of a target reliability of one or more disaggregated resources of the set of disaggregated resources to be included in the managed node, compose a managed node from the set of disaggregated resources that satisfies the node parameters by configuring the compute sled to utilize the disaggregated resources of the managed node for the execution of a workload, and monitor the disaggregated resources of the managed node for a failure.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: November 3, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Murugasamy K. Nachimuthu, Daniel Rivas Barragan
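
A minimal sketch of the reliability-aware node composition described in patent 10824358: given reported per-resource reliability, pick resources that satisfy a target. The `Resource` record, resource kinds, and selection rule are assumptions made for illustration.

```python
# Minimal sketch: compose a managed node from disaggregated resources that meet
# a target reliability reported in the resource data.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str           # e.g. "compute", "memory", "accelerator"
    reliability: float  # 0.0 .. 1.0, as reported by the resource

def compose_node(pool, required_kinds, target_reliability):
    """For each required kind, pick the most reliable resource meeting the target."""
    node = {}
    for kind in required_kinds:
        candidates = [r for r in pool if r.kind == kind and r.reliability >= target_reliability]
        if not candidates:
            raise RuntimeError(f"no {kind} resource meets reliability {target_reliability}")
        node[kind] = max(candidates, key=lambda r: r.reliability)
    return node

pool = [Resource("cpu-0", "compute", 0.999), Resource("mem-3", "memory", 0.95),
        Resource("fpga-1", "accelerator", 0.97), Resource("mem-7", "memory", 0.999)]
print(compose_node(pool, ["compute", "memory"], target_reliability=0.99))
```
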
  • Patent number: 10823920
    Abstract: Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: November 3, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Rahul Khanna, Nishi Ahuja, Mrittika Ganguli
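
The key constraint in patent 10823920 above is accepting an adjustment only if it improves at least one resource allocation objective without degrading any other. The sketch below expresses that check as a Pareto-improvement test; the objective names and scores are illustrative, not taken from the patent.

```python
# Minimal sketch: accept a proposed workload adjustment only if no objective
# regresses and at least one improves (higher score = better).
def is_pareto_improvement(before, after):
    no_regression = all(after[k] >= before[k] for k in before)
    some_gain = any(after[k] > before[k] for k in before)
    return no_regression and some_gain

current = {"throughput": 0.72, "power_efficiency": 0.60, "latency_slo": 0.90}
proposed = {"throughput": 0.78, "power_efficiency": 0.60, "latency_slo": 0.90}
print(is_pareto_improvement(current, proposed))  # True: throughput up, nothing worse
```
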
  • Publication number: 20200341824
    Abstract: Technologies for providing inter-kernel communication abstraction to support scale-up and scale-out include an accelerator device. The accelerator device includes circuitry to receive, from a kernel of the present accelerator device, a request through an application programming interface exposed to a high level software language in which the kernel of the present accelerator device is implemented, to establish a logical communication path between the kernel of the present accelerator device and a target accelerator device kernel, based on one or more physical communication paths. Additionally, the circuitry is to establish, in response to the request, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel and communicate data between the kernel of the present accelerator device and the other accelerator device kernel with a unified communication protocol that manages differences between the physical communication paths.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 29, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Narayan Ranganathan, Paul H. Dormitzer
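
To illustrate the kernel-facing abstraction in publication 20200341824, the sketch below lets a kernel request a logical path to a target kernel and then send data without knowing which physical path carries it. The API shape, path attributes, and framing are assumptions for illustration only.

```python
# Minimal sketch: a logical communication path chosen from several physical paths,
# hidden behind a uniform send() call.
class LogicalPath:
    def __init__(self, src_kernel, dst_kernel, physical_paths):
        # Prefer the lowest-latency physical path; the choice is hidden from the kernel.
        self.src, self.dst = src_kernel, dst_kernel
        self.physical = min(physical_paths, key=lambda p: p["latency_us"])

    def send(self, payload):
        # Uniform framing regardless of the underlying transport (PCIe, Ethernet, ...).
        return {"src": self.src, "dst": self.dst, "via": self.physical["name"], "payload": payload}

def establish_path(src_kernel, dst_kernel, discovered_paths):
    """The call a high-level-language kernel would make through the exposed API."""
    return LogicalPath(src_kernel, dst_kernel, discovered_paths)

paths = [{"name": "pcie0", "latency_us": 2.0}, {"name": "eth1", "latency_us": 9.0}]
link = establish_path("kernelA", "kernelB", paths)
print(link.send(b"hello")["via"])  # "pcie0", chosen transparently
```
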
  • Patent number: 10789189
    Abstract: Technologies for providing inter-kernel flow control for accelerator device kernels include an accelerator device. The accelerator device includes circuitry to determine availability data indicative of an availability of one or more accelerator device kernels in a system. The availability data includes credit data indicative of a number of data packets permitted to be sent from an output port associated with a kernel of the present accelerator device to an input port associated with another accelerator device kernel. The circuitry is also to obtain a data packet to be processed by a target accelerator device kernel in the system. Additionally, the circuitry is to determine, as a function of the credit data, an output port to send the data packet through to provide the data packet to the target accelerator device kernel. Additionally, the circuitry is to send the data packet through the determined output port.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: September 29, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio
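
The credit-based port selection in patent 10789189 can be sketched as a per-port credit table: consume a credit per packet sent, replenish when the receiver returns credits, and pick among candidate ports by remaining credits. Port names, the replenishment model, and the "most credits wins" policy are illustrative assumptions.

```python
# Minimal sketch of credit-based inter-kernel flow control.
class CreditTable:
    def __init__(self, credits):
        self.credits = dict(credits)     # output port -> remaining packet credits

    def pick_port(self, candidate_ports):
        """Choose the candidate port with the most remaining credits."""
        available = [p for p in candidate_ports if self.credits.get(p, 0) > 0]
        if not available:
            raise RuntimeError("no credits available; wait for the receiver to return some")
        return max(available, key=lambda p: self.credits[p])

    def send(self, port, packet):
        self.credits[port] -= 1          # one credit consumed per packet
        # ... transmit `packet` on `port` ...

    def on_credit_return(self, port, n=1):
        self.credits[port] += n          # receiver freed buffer space

table = CreditTable({"port0": 3, "port1": 1})
port = table.pick_port(["port0", "port1"])
table.send(port, b"payload")
print(port, table.credits)               # port0 {'port0': 2, 'port1': 1}
```
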
  • Patent number: 10771870
    Abstract: Technologies for dynamically allocating resources among a set of managed nodes include an orchestrator server to receive telemetry data from the managed nodes indicative of resource utilization and workload performance by the managed nodes as the workloads are executed, generate a resource allocation map indicative of allocations of resources among the managed nodes, determine, as a function of the telemetry data and the resource allocation map, a dynamic adjustment to allocation of resources to at least one of the managed nodes to improve performance of at least one of the workloads executed on the at least one of the managed nodes, and apply the adjustment to the allocation of the resources among the managed nodes as the workloads are executed. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Rahul Khanna, Nishi Ahuja, Mrittika Ganguli
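
A small sketch of the adjustment step in patent 10771870: combine telemetry with a resource allocation map and shift some capacity from an under-utilized node toward an over-utilized one. The node names, utilization fields, and fixed step size are assumptions for illustration.

```python
# Minimal sketch: nudge the resource allocation map based on telemetry.
def rebalance(allocation_map, telemetry, resource="memory_gb", step=8):
    """Move `step` units of `resource` from the least- to the most-utilized node."""
    hot = max(telemetry, key=lambda n: telemetry[n][resource + "_util"])
    cold = min(telemetry, key=lambda n: telemetry[n][resource + "_util"])
    if hot == cold or allocation_map[cold][resource] <= step:
        return allocation_map  # nothing sensible to move
    allocation_map[cold][resource] -= step
    allocation_map[hot][resource] += step
    return allocation_map

alloc = {"node-1": {"memory_gb": 64}, "node-2": {"memory_gb": 64}}
util = {"node-1": {"memory_gb_util": 0.95}, "node-2": {"memory_gb_util": 0.30}}
print(rebalance(alloc, util))  # node-1 gains 8 GB taken from node-2
```
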
  • Patent number: 10768842
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
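
The lookup-and-route step in patent 10768842 reduces to a range-based translation from a logical address to a physical address plus the memory device that owns it. The map layout, address ranges, and device names below are illustrative.

```python
# Minimal sketch: logical-to-physical translation and routing to a memory device.
ADDRESS_MAP = [
    # (logical_base, size, physical_base, memory_device)
    (0x0000_0000, 0x1000_0000, 0x4_0000_0000, "dimm-0"),
    (0x1000_0000, 0x1000_0000, 0x8_0000_0000, "pooled-mem-3"),
]

def route(logical_addr):
    """Translate a logical address and name the device that should service it."""
    for base, size, phys_base, device in ADDRESS_MAP:
        if base <= logical_addr < base + size:
            return device, phys_base + (logical_addr - base)
    raise ValueError(f"unmapped logical address {logical_addr:#x}")

device, phys = route(0x1000_2040)
print(device, hex(phys))  # pooled-mem-3 0x800002040
```
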
  • Patent number: 10735835
    Abstract: Technologies for allocating resources of a set of managed nodes to workloads to manage heat generation include an orchestrator server to receive resource allocation objective data including a target temperature for one or more of the managed nodes. The orchestrator server is also to determine an initial assignment of a set of workloads among the managed nodes, receive telemetry data from the managed nodes indicative of resource utilization by each of the managed nodes and one or more temperatures and fan speeds of the managed nodes as the workloads are performed, predict future heat generation of the workloads as a function of the telemetry data, determine, as a function of the predicted future heat generation, an adjustment to the assignment of the workloads to achieve the target temperature, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: August 4, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Rahul Khanna, Nishi Ahuja, Mrittika Ganguli
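
To make the heat-aware reassignment in patent 10735835 concrete, the sketch below predicts node temperature from utilization and fan telemetry with a crude linear model and moves a workload off any node predicted to exceed the target. The model, its coefficients, and the "move one workload to the coolest node" policy are illustrative assumptions, not the patented prediction method.

```python
# Minimal sketch: predict heat from telemetry, then reassign to hit a target temperature.
def predicted_temp(node):
    # crude model: ambient + utilization heating - fan cooling (illustrative only)
    return 25.0 + 45.0 * node["cpu_util"] - 10.0 * node["fan_speed_frac"]

def adjust_assignment(nodes, assignments, target_c=60.0):
    """If a node is predicted to run hot, move one of its workloads to the coolest node."""
    hot = [n for n in nodes if predicted_temp(nodes[n]) > target_c]
    for name in hot:
        if not assignments[name]:
            continue
        coolest = min(nodes, key=lambda n: predicted_temp(nodes[n]))
        if coolest != name:
            assignments[coolest].append(assignments[name].pop())
    return assignments

nodes = {"n1": {"cpu_util": 0.95, "fan_speed_frac": 0.4},
         "n2": {"cpu_util": 0.20, "fan_speed_frac": 0.6}}
print(adjust_assignment(nodes, {"n1": ["job-a", "job-b"], "n2": []}))
```
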
  • Publication number: 20200228626
    Abstract: Technologies for providing advanced resource management in a disaggregated environment include a compute device. The compute device includes circuitry to obtain a workload to be executed by a set of resources in a disaggregated system, query a sled in the disaggregated system to identify an estimated time to complete execution of a portion of the workload to be accelerated using a kernel, and assign, in response to a determination that the estimated time to complete execution of the portion of the workload satisfies a target quality of service associated with the workload, the portion of the workload to the sled for acceleration.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 16, 2020
    Inventors: Francesc Guim Bernat, Slawomir Putyrski, Susanne M. Balle, Thomas Willhalm, Karthik Kumar
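
Publication 20200228626 describes a query-compare-assign flow: ask a sled for its estimated completion time for an accelerated portion of a workload, then assign it only if the estimate satisfies the workload's quality-of-service target. The sketch below follows that shape; the query function, its queue-depth estimate, and all names are hypothetical stand-ins.

```python
# Minimal sketch: assign an accelerated workload portion only if the sled's estimate
# meets the QoS target.
def query_estimated_completion_ms(sled, workload_part):
    # Stand-in for querying the sled; here a fake queue-depth-based estimate.
    return sled["queue_depth"] * 5.0 + workload_part["kernel_ms"]

def assign_if_meets_qos(sled, workload_part, qos_target_ms):
    estimate = query_estimated_completion_ms(sled, workload_part)
    if estimate <= qos_target_ms:
        sled["assigned"].append(workload_part["name"])
        return True
    return False

sled = {"queue_depth": 3, "assigned": []}
part = {"name": "matmul-tile-7", "kernel_ms": 12.0}
print(assign_if_meets_qos(sled, part, qos_target_ms=30.0), sled["assigned"])
```
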
  • Patent number: 10712963
    Abstract: Technologies for encrypted data access by field-programmable gate array (FPGA) user kernels include a computing device having an FPGA and an external memory device accessible by the FPGA. The FPGA includes a secure key store, a micro-encryption engine, and multiple slots for user kernels that are each identifiable with an index. A user kernel is programmed at an index and a symmetric encryption key is provisioned to the secure key store at the index. The micro-encryption engine may read encrypted data from the external memory device, decrypt the encrypted data with the key associated with the index of the user kernel, and forward plain text data to the user kernel. The micro-encryption engine may also receive plain text data from the user kernel, encrypt the plain text data with the key, and write the encrypted data to the external memory device. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Rahul Khanna, Susanne M. Balle, Francesc Guim Bernat, Sujoy Sen, Paul Dormitzer
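
The essential pattern in patent 10712963 is that keys live in a store indexed by kernel slot, and the encryption engine transforms data on the kernel's behalf so the kernel never handles the key. The sketch below mirrors that flow; the class, method names, and the XOR "cipher" (a dependency-free toy stand-in for a real cipher) are assumptions for illustration.

```python
# Minimal sketch: slot-indexed key store with transparent encrypt/decrypt on behalf
# of the user kernel at that slot.
import secrets

class MicroEncryptionEngine:
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self._keys = {}              # secure key store, indexed by kernel slot
        self.external_memory = {}    # address -> ciphertext

    def provision_key(self, index):
        if not 0 <= index < self.num_slots:
            raise IndexError("no such kernel slot")
        self._keys[index] = secrets.token_bytes(32)

    def _xor(self, key, data):
        # Toy stand-in for the real cipher; keeps the sketch dependency-free.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def write(self, index, addr, plaintext):
        self.external_memory[addr] = self._xor(self._keys[index], plaintext)

    def read(self, index, addr):
        return self._xor(self._keys[index], self.external_memory[addr])

engine = MicroEncryptionEngine(num_slots=4)
engine.provision_key(index=2)
engine.write(index=2, addr=0x1000, plaintext=b"user kernel data")
print(engine.read(index=2, addr=0x1000))  # plaintext restored for the slot-2 kernel
```
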
  • Patent number: 10686688
    Abstract: Techniques for reducing fragmentation in software-defined infrastructures are described. A compute node, including one or more processor circuits, may be configured to access one or more remote resources via a fabric, and may be configured to receive a dynamic tolerated fragmentation for the one or more remote resources. The compute node may be configured to monitor the performance of the one or more remote resources. For example, the compute node may be configured to monitor whether one or more of the monitored resources exceeds a bandwidth or latency threshold defined by the dynamic tolerated fragmentation. The compute node may be configured to determine that the monitored performance of the one or more remote resources is outside a threshold defined by the dynamic tolerated fragmentation.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Daniel Rivas Barragan, John Chun Kwok Leung, Suraj Prabhakaran, Murugasamy K. Nachimuthu, Slawomir Putyrski
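
The monitoring step in patent 10686688 amounts to checking sampled remote-resource performance against the bandwidth and latency envelope given by the dynamic tolerated fragmentation. The sketch below does exactly that check; the metric names and threshold values are illustrative.

```python
# Minimal sketch: flag metrics that fall outside the dynamic tolerated fragmentation.
def outside_tolerance(sample, tolerated):
    """Return the metrics that violate the tolerated bandwidth/latency ranges."""
    violations = []
    if sample["bandwidth_gbps"] < tolerated["min_bandwidth_gbps"]:
        violations.append("bandwidth")
    if sample["latency_us"] > tolerated["max_latency_us"]:
        violations.append("latency")
    return violations

tolerated = {"min_bandwidth_gbps": 40.0, "max_latency_us": 5.0}
sample = {"bandwidth_gbps": 33.5, "latency_us": 4.1}
print(outside_tolerance(sample, tolerated))  # ['bandwidth'] -> trigger remediation
```
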
  • Patent number: 10678737
    Abstract: Technologies for providing dynamic communication path modification for accelerator device kernels include an accelerator device comprising circuitry to obtain initial availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also to produce, as a function of the initial availability data, a connectivity matrix indicative of the physical communication paths and a logical communication path defined by one or more of the physical communication paths between a kernel of the present accelerator device and a target accelerator device kernel. Additionally, the circuitry is to obtain updated availability data indicative of a subsequent availability of each accelerator device kernel and update, as a function of the updated availability data, the connectivity matrix to modify the logical communication path.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: June 9, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Slawomir Putyrski, Joseph Grecco, Evan Custodio, Francesc Guim Bernat
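
Patent 10678737 maintains a connectivity matrix from availability data and updates the logical communication path when availability changes. The sketch below models the matrix as an adjacency map and recomputes a path with breadth-first search after a link is removed; BFS, the device names, and the dictionary representation are illustrative stand-ins for whatever policy the device actually uses.

```python
# Minimal sketch: connectivity matrix from availability data, with path recomputation
# when updated availability data removes a physical path.
from collections import deque

def build_matrix(links):
    matrix = {}
    for a, b in links:                      # physical communication paths
        matrix.setdefault(a, set()).add(b)
        matrix.setdefault(b, set()).add(a)
    return matrix

def logical_path(matrix, src, dst):
    """Shortest hop sequence from src to dst, if one exists."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in matrix.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

matrix = build_matrix([("kernelA", "fpga1"), ("fpga1", "kernelB"), ("kernelA", "fpga2")])
print(logical_path(matrix, "kernelA", "kernelB"))   # ['kernelA', 'fpga1', 'kernelB']
matrix["fpga1"].discard("kernelB")                   # updated availability data
matrix["kernelB"].discard("fpga1")
print(logical_path(matrix, "kernelA", "kernelB"))   # None -> path must be re-established
```
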
  • Patent number: 10616668
    Abstract: Technologies for allocating resources of a set of managed nodes to workloads based on resource utilization phase residencies include an orchestrator server to receive resource allocation objective data and determine an assignment of a set of workloads among the managed nodes. The orchestrator server is further to receive telemetry data from the managed nodes, determine, as a function of the telemetry data, phase residency data, determine, as a function of at least the phase residency data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing the achievement of any of the other resource allocation objectives, and apply the adjustment to the assignments of the workloads among the managed nodes as the workloads are performed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: April 7, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Rahul Khanna, Nishi Ahuja, Mrittika Ganguli
  • Publication number: 20200104275
    Abstract: Some examples provide a manner for a memory transaction requester to configure a target to recognize a memory address as a non-local or non-shared address. An intermediary between the requester and the target configures a control plane layer of the target to recognize that a memory transaction involving the memory address is to be performed using a direct memory access operation. The intermediary is connected to the requester as a local device or process. After configuration, a memory transaction provided to the target with the configured memory address causes the target to invoke the associated direct memory access operation to retrieve content associated with the memory address or to write content using a direct memory access operation.
    Type: Application
    Filed: December 2, 2019
    Publication date: April 2, 2020
    Inventors: Sujoy Sen, Susanne M. Balle, Narayan Ranganathan, Bradley A. Burres
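
Publication 20200104275 separates a control-plane configuration step (register an address range as non-local) from the data path (later transactions on that range are serviced via direct memory access). The sketch below mirrors that split; the class, method names, and address range are illustrative assumptions.

```python
# Minimal sketch: control-plane registration of an address range, then data-path
# dispatch that chooses DMA for configured addresses.
class Target:
    def __init__(self):
        self.dma_ranges = []                           # configured by the control plane

    def configure_dma_range(self, base, size):
        self.dma_ranges.append((base, base + size))

    def handle_transaction(self, addr, op):
        if any(lo <= addr < hi for lo, hi in self.dma_ranges):
            return f"{op} at {addr:#x} serviced via direct memory access"
        return f"{op} at {addr:#x} serviced as a local access"

target = Target()
target.configure_dma_range(base=0x2000_0000, size=0x1000)   # done by the intermediary
print(target.handle_transaction(0x2000_0040, "read"))
print(target.handle_transaction(0x0000_0040, "read"))
```
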
  • Publication number: 20200073849
    Abstract: Technologies for network interface controllers (NICs) include a computing device having a NIC coupled to a root FPGA via an I/O link. The root FPGA is further coupled to multiple worker FPGAs, with a serial link to each worker FPGA. The NIC may receive a remote direct memory access (RDMA) message from a remote host and send the RDMA message to the root FPGA via the I/O link. The root FPGA determines a target FPGA based on a memory address of the RDMA message. Each FPGA is associated with a part of a unified address space. If the target FPGA is a worker FPGA, the root FPGA sends the RDMA message to the worker FPGA via the corresponding serial link, and the worker FPGA processes the RDMA message. If the root FPGA is the target, the root FPGA may process the RDMA message. Other embodiments are described and claimed.
    Type: Application
    Filed: May 3, 2019
    Publication date: March 5, 2020
    Inventors: Paul H. Dormitzer, Susanne M. Balle, Sujoy Sen, Evan Custodio
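
The routing decision in publication 20200073849 hinges on each FPGA owning a slice of a unified address space: the root FPGA handles an RDMA message locally or forwards it to the worker that owns the target address. The address ranges and FPGA names below are illustrative.

```python
# Minimal sketch: root FPGA routes an RDMA message by its target address within a
# unified address space.
ADDRESS_SLICES = {
    "root":     (0x0000_0000, 0x4000_0000),
    "worker-0": (0x4000_0000, 0x8000_0000),
    "worker-1": (0x8000_0000, 0xC000_0000),
}

def route_rdma(message):
    addr = message["addr"]
    for fpga, (lo, hi) in ADDRESS_SLICES.items():
        if lo <= addr < hi:
            if fpga == "root":
                return "processed locally on the root FPGA"
            return f"forwarded over the serial link to {fpga}"
    raise ValueError("address outside the unified address space")

print(route_rdma({"addr": 0x4000_1000, "payload": b"..."}))  # forwarded to worker-0
```
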
  • Publication number: 20200073464
    Abstract: Technologies for providing adaptive power management in an accelerator sled include an accelerator sled having circuitry to determine, based on (i) a total power budget for the accelerator sled, (ii) service level agreement (SLA) data indicative of a target performance of a kernel, and (iii) profile data indicative of a performance of the kernel as a function of a power utilization of the kernel, a power utilization limit for the kernel to be executed by an accelerator device on the accelerator sled. Additionally, the circuitry is to allocate the determined power utilization limit to the kernel and execute the kernel under the allocated power utilization limit.
    Type: Application
    Filed: April 25, 2019
    Publication date: March 5, 2020
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Sujoy Sen, Evan Custodio, Paul H. Dormitzer
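
Publication 20200073464 picks a per-kernel power limit from three inputs: the sled's power budget, the SLA's performance target, and a profile of performance versus power. A straightforward reading is "lowest power point that still meets the SLA and fits the remaining budget", which the sketch below implements; the profile numbers and units are made up for illustration.

```python
# Minimal sketch: choose a kernel's power utilization limit from budget, SLA, and profile.
def pick_power_limit(profile, sla_target_ops, remaining_budget_w):
    """Lowest power point that still meets the SLA and fits the remaining sled budget."""
    for watts, ops_per_sec in sorted(profile):
        if ops_per_sec >= sla_target_ops and watts <= remaining_budget_w:
            return watts
    return None  # cannot meet the SLA within the budget

profile = [(20, 1.0e6), (35, 1.8e6), (50, 2.2e6)]   # (watts, achieved ops/s)
print(pick_power_limit(profile, sla_target_ops=1.5e6, remaining_budget_w=40))  # 35
```
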
  • Patent number: 10579547
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to receive a request to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to the request and as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio
  • Publication number: 20200004712
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Application
    Filed: December 28, 2018
    Publication date: January 2, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Publication number: 20190372914
    Abstract: Technologies for network interface controllers (NICs) include a compute sled and an accelerator sled in communication over a network. The accelerator sled configures a virtual switch endpoint associated with a remote direct memory access (RDMA) server instance that is associated with a field-programmable gate array (FPGA) of the accelerator sled. The accelerator sled updates local software defined networking (SDN) tables with a virtual tunnel associated with the virtual switch endpoint and a remote compute sled. A virtual switch of the accelerator sled switches virtual tunnel traffic from the remote compute sled to the RDMA server instance, which transfers data to or from the FPGA. The compute sled also updates a local SDN table with the virtual tunnel, and a virtual switch of the compute sled switches virtual tunnel traffic to or from the accelerator sled. Other embodiments are described and claimed.
    Type: Application
    Filed: August 14, 2019
    Publication date: December 5, 2019
    Inventors: Mrittika Ganguli, Sugesh Chandran, Parthasarathy Sarangam, Sujoy Sen, Susanne M. Balle, Rajesh Sankaran
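
At its core, the data path in publication 20190372914 is a software-defined networking (SDN) table lookup: the virtual switch maps traffic between a local RDMA server endpoint and a remote compute sled onto the virtual tunnel that was configured for that pair. The table layout, endpoint names, and tunnel identifier below are illustrative assumptions.

```python
# Minimal sketch: SDN-table update plus virtual-switch lookup for tunneled RDMA traffic.
sdn_table = {}   # (local endpoint, remote sled) -> virtual tunnel id

def add_tunnel(local_endpoint, remote_sled, tunnel_id):
    sdn_table[(local_endpoint, remote_sled)] = tunnel_id

def switch(packet):
    """Map traffic for a (local endpoint, remote sled) pair onto its configured tunnel."""
    tunnel = sdn_table.get((packet["endpoint"], packet["remote_sled"]))
    return f"deliver via tunnel {tunnel}" if tunnel is not None else "drop: no tunnel configured"

add_tunnel("rdma-server-0", "compute-sled-7", tunnel_id=42)
print(switch({"endpoint": "rdma-server-0", "remote_sled": "compute-sled-7"}))
```
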
  • Patent number: 10461774
    Abstract: Technologies for assigning workloads based on resource utilization phases include an orchestrator server to assign a set of workloads to the managed nodes. The orchestrator server is also to receive telemetry data from the managed nodes and identify, as a function of the telemetry data, historical resource utilization phases of the workloads. Further, the orchestrator server is to determine, as a function of the historical resource utilization phases and as the workloads are performed, predicted resource utilization phases for the workloads, and apply, as a function of the predicted resource utilization phases, adjustments to the assignments of the workloads among the managed nodes as the workloads are performed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: October 29, 2019
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Rahul Khanna, Nishi Ahuja, Mrittika Ganguli
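
To illustrate the prediction step in patent 10461774, the sketch below learns which phase most often follows each phase in a workload's history and uses that to predict the next phase. This first-order frequency model, and the phase labels, are illustrative assumptions rather than the patented method.

```python
# Minimal sketch: predict a workload's next resource-utilization phase from history.
from collections import Counter, defaultdict

def build_phase_model(history):
    """history: ordered list of observed phases, e.g. 'compute', 'memory', 'io'."""
    successors = defaultdict(Counter)
    for current, nxt in zip(history, history[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(model, current_phase):
    counts = model.get(current_phase)
    return counts.most_common(1)[0][0] if counts else None

history = ["compute", "memory", "compute", "memory", "io", "compute", "memory"]
model = build_phase_model(history)
print(predict_next(model, "compute"))   # 'memory' -> adjust assignments ahead of the phase
```
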