Patents by Inventor Susanne M. Balle

Susanne M. Balle has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220382944
    Abstract: Methods and apparatus for an extended inter-kernel communication protocol for discovery of accelerator pools configured in a non-star mode. Under a discovery algorithm, discovery requests are sent from a root node to non-root nodes in the accelerator pool using an inter-kernel communication protocol comprising a data transmission protocol built over a Media Access Control (MAC) layer and transported over links coupled between IO ports on accelerators. The discovery requests are used to discover each of the nodes in the accelerator pool and determine the topology of the nodes. During this process, MAC address table entries are generated at the various nodes comprising (key, value) pairs of MAC IO port addresses identifying destination nodes that may be reached by each node and the shortest path to reach such destination nodes. The discovery algorithm may also be used to discover storage related information for the accelerators.
    Type: Application
    Filed: May 21, 2021
    Publication date: December 1, 2022
    Inventors: Han YIN, Xiaotong SUN, Susanne M. BALLE
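    The abstract above describes a discovery algorithm that builds per-node shortest-path tables over accelerator-to-accelerator links. Below is a minimal Python sketch of that bookkeeping, assuming a breadth-first walk from the root node; the MAC addresses, table layout, and function names are illustrative and not taken from the patent.

    ```python
    from collections import deque

    def discover(root, links):
        """Model of the discovery pass: breadth-first walk from the root node
        over accelerator-to-accelerator links, recording for each reachable
        node its hop distance and the first hop used to reach it.
        `links` maps a node's MAC address to the MAC addresses it is wired to."""
        table = {root: (None, 0)}            # MAC -> (first_hop, hop_count)
        queue = deque([root])
        while queue:
            node = queue.popleft()
            first_hop, dist = table[node]
            for neighbor in links.get(node, ()):
                if neighbor not in table:    # first time seen == shortest path (BFS)
                    table[neighbor] = (neighbor if node == root else first_hop, dist + 1)
                    queue.append(neighbor)
        return table

    # Example: a small non-star (line) topology of accelerator IO ports.
    links = {
        "aa:00": ["aa:01"],
        "aa:01": ["aa:00", "aa:02"],
        "aa:02": ["aa:01"],
    }
    print(discover("aa:00", links))
    # {'aa:00': (None, 0), 'aa:01': ('aa:01', 1), 'aa:02': ('aa:01', 2)}
    ```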
  • Patent number: 11474700
    Abstract: Technologies for compressing communications for accelerator devices are disclosed. An accelerator device may include a communication abstraction logic unit to manage communication with one or more remote accelerator devices. The communication abstraction logic unit may receive communication to and from a kernel on the accelerator device. The communication abstraction logic unit may compress and decompress the communication without instruction from the corresponding kernel. The communication abstraction logic unit may choose when and how to compress communications based on telemetry of the accelerator device and the remote accelerator device.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: October 18, 2022
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat
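    As a rough illustration of the telemetry-driven compression decision described above, the policy function below is a sketch under assumed inputs (payload size, device and link utilization); the thresholds and names are invented for the example.

    ```python
    def should_compress(payload_len, local_util, remote_util, link_util,
                        min_size=4096, busy=0.85):
        """Illustrative policy: compress only when the payload is large enough
        to be worth it, the link is the bottleneck, and neither accelerator
        is already saturated with work."""
        if payload_len < min_size:
            return False                      # small messages: latency dominates
        if local_util > busy or remote_util > busy:
            return False                      # no spare cycles for (de)compression
        return link_util > 0.5                # congested link: trade cycles for bytes

    print(should_compress(64 * 1024, local_util=0.4, remote_util=0.3, link_util=0.7))  # True
    ```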
  • Publication number: 20220321438
    Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
    Type: Application
    Filed: April 29, 2022
    Publication date: October 6, 2022
    Inventors: Francesc GUIM BERNAT, Susanne M. BALLE, Rahul KHANNA, Sujoy SEN, Karthik KUMAR
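    The dynamic resource allocation described above (identify a threshold, limit utilization against it, adjust as the workload runs) can be modeled in a few lines. This is a toy sketch; the class, thresholds, and adjustment rule are assumptions, not the patented mechanism.

    ```python
    class DynamicResourceAllocator:
        """Toy model: each logic portion gets a utilization threshold for a
        shared resource (e.g. memory bandwidth); requests above the threshold
        are clipped, and the threshold is periodically revised from telemetry."""

        def __init__(self, initial_threshold):
            self.threshold = initial_threshold          # fraction of the shared resource

        def limit(self, requested):
            """Clip a logic portion's request to its current threshold."""
            return min(requested, self.threshold)

        def adjust(self, observed_utilization, target=0.8, step=0.05):
            """Nudge the threshold while the workload runs: relax it when the
            shared resource is underused, tighten it when it is over target."""
            if observed_utilization < target:
                self.threshold = min(1.0, self.threshold + step)
            else:
                self.threshold = max(step, self.threshold - step)
            return self.threshold

    alloc = DynamicResourceAllocator(initial_threshold=0.5)
    print(alloc.limit(0.7))        # 0.5  -> request clipped to the threshold
    print(alloc.adjust(0.6))       # 0.55 -> resource underused, threshold relaxed
    ```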
  • Publication number: 20220321491
    Abstract: Examples described herein relate to a network interface device that includes circuitry to process data and circuitry to split a received flow of a mixture of control and data content and provide the control content to a control plane processor and provide the data content for access to the circuitry to process data, wherein the mixture of control and data content is received as part of a Remote Procedure Call. In some examples, to provide the control content to a control plane processor, the circuitry is to remove data content from a received packet and include an indicator of a location of removed data content in the received packet.
    Type: Application
    Filed: June 20, 2022
    Publication date: October 6, 2022
    Inventors: Susanne M. BALLE, Shihwei CHIEN, Duane E. GALBI, Nagabhushan CHITLUR
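    Below is a small sketch of the control/data split described above, assuming the control content is a fixed-length header and the indicator records where the removed data content sat in the packet; field names are illustrative.

    ```python
    def split_rpc(packet, header_len):
        """Split a received RPC packet into control content (headers) for the
        control plane processor and data content for the data-path circuitry.
        The control copy carries an indicator of where the removed data lived."""
        control = packet[:header_len]
        data = packet[header_len:]
        indicator = {"data_offset": header_len, "data_length": len(data)}
        return control, indicator, data

    packet = b"HDR:call=Put;len=11" + b"hello world"
    control, indicator, data = split_rpc(packet, header_len=19)
    print(control)     # b'HDR:call=Put;len=11'
    print(indicator)   # {'data_offset': 19, 'data_length': 11}
    print(data)        # b'hello world'
    ```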
  • Publication number: 20220321434
    Abstract: Reliability and performance of a data center is increased by processing telemetry data in a network device in the data center. A Telemetry Correlation Engine (TCE) in the network device correlates host level telemetry received from a compute node with low-level network device telemetry collected in the network device to identify performance bottlenecks for microservices based applications. The Telemetry Correlation Engine processes and analyzes the telemetry data from the compute node and network statistics available in the network device.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 6, 2022
    Inventors: Andrzej KURIATA, Francesc GUIM BERNAT, Karthik KUMAR, Susanne M. BALLE, Alexander BACHMUTSKY, Duane E. GALBI, Nagabhushan CHITLUR, Sundar NADATHUR
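    A minimal sketch of the correlation step described above follows, assuming host and network-device telemetry samples carry timestamps that can be joined within a time window; the sample fields are invented for the example.

    ```python
    def correlate(host_samples, nic_samples, window=1.0):
        """Join host-level telemetry (e.g. per-microservice latency) with
        network-device statistics taken within `window` seconds of each other,
        so a latency spike can be lined up with, say, a queue-depth spike."""
        joined = []
        for h in host_samples:
            nearby = [n for n in nic_samples if abs(n["ts"] - h["ts"]) <= window]
            for n in nearby:
                joined.append({**h, **{f"nic_{k}": v for k, v in n.items() if k != "ts"}})
        return joined

    host = [{"ts": 10.0, "service": "checkout", "p99_ms": 250}]
    nic = [{"ts": 10.3, "queue_depth": 950, "drops": 12}]
    print(correlate(host, nic))
    # [{'ts': 10.0, 'service': 'checkout', 'p99_ms': 250, 'nic_queue_depth': 950, 'nic_drops': 12}]
    ```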
  • Patent number: 11444866
    Abstract: Techniques for managing static and dynamic partitions in software-defined infrastructures (SDI) are described. An SDI manager component may include one or more processor circuits to access one or more resources. The SDI manager component may include a partition manager to create one or more partitions using the one or more resources, the one or more partitions each including a plurality of nodes of a similar resource type. The SDI manager may generate an update to a pre-composed partition table, stored within a non-transitory computer-readable storage medium, including the created one or more partitions, and receive a request from an orchestrator for a node. The SDI manager may select one of the created one or more partitions for the orchestrator based upon the pre-composed partition table, and identify the selected partition to the orchestrator. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: September 13, 2022
    Assignee: Intel Corporation
    Inventors: Daniel Rivas Barragan, Francesc Guim Bernat, Susanne M. Balle, John Chun Kwok Leung, Suraj Prabhakaran, Murugasamy K. Nachimuthu, Slawomir Putyrski
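    Below is a toy model of the pre-composed partition table and the selection step described above; the table fields and matching rule are assumptions made for illustration.

    ```python
    # Hypothetical model of the pre-composed partition table: partitions are
    # groups of nodes of a similar resource type, composed ahead of requests.
    partition_table = [
        {"id": "p-compute-1", "resource_type": "compute", "nodes": ["n1", "n2", "n3"], "free": True},
        {"id": "p-storage-1", "resource_type": "storage", "nodes": ["n7", "n8"], "free": True},
    ]

    def select_partition(resource_type, node_count):
        """Serve an orchestrator request for nodes from an already-composed
        (static) partition instead of composing one dynamically."""
        for part in partition_table:
            if part["free"] and part["resource_type"] == resource_type and len(part["nodes"]) >= node_count:
                part["free"] = False          # mark the partition as handed out
                return part["id"]
        return None                           # fall back to dynamic composition

    print(select_partition("compute", 2))     # p-compute-1
    ```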
  • Patent number: 11429297
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: August 30, 2022
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
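    A sketch of the job-division step described above, assuming each accelerator's configuration is summarized as a single capability score and the job consists of independent work units; the proportional split is one plausible policy, not the patented one.

    ```python
    def divide_job(job_units, accelerators):
        """Split a job of independent work units across accelerators in
        proportion to each device's capability score, so faster devices get
        proportionally more tasks to run in parallel."""
        total = sum(a["score"] for a in accelerators)
        schedule, start = {}, 0
        for i, acc in enumerate(accelerators):
            # Last device takes the remainder so every unit is assigned exactly once.
            count = job_units - start if i == len(accelerators) - 1 else round(job_units * acc["score"] / total)
            schedule[acc["name"]] = list(range(start, start + count))
            start += count
        return schedule

    accs = [{"name": "fpga0", "score": 3}, {"name": "gpu0", "score": 1}]
    print(divide_job(8, accs))   # {'fpga0': [0, 1, 2, 3, 4, 5], 'gpu0': [6, 7]}
    ```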
  • Publication number: 20220269433
    Abstract: In an embodiment, an apparatus includes: a first downstream port to couple to a first peer device; a second downstream port to couple to a second peer device; and a peer-to-peer (PTP) circuit to receive a memory access request from the first peer device, the memory access request having a target associated with the second peer device, where the PTP circuit is to convert the memory access request from a coherent protocol to a memory protocol and send the converted memory access request to the second peer device. Other embodiments are described and claimed.
    Type: Application
    Filed: February 28, 2022
    Publication date: August 25, 2022
    Inventors: Rahul Pal, Susanne M. Balle, David Puffer, Nagabhushan Chitlur
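    Below is a toy software model of the protocol conversion described above, assuming a coherent-protocol request carries coherence-only fields that are dropped when it is rewritten as a plain memory-protocol command; the opcodes and field names are invented.

    ```python
    def convert_to_mem_protocol(coherent_req):
        """Toy translation of a coherent-protocol access (which carries cache
        state such as the requested line state) into a plain memory-protocol
        command that the second peer device understands: opcode, address, size."""
        opcode = "MEM_RD" if coherent_req["op"].startswith("Rd") else "MEM_WR"
        return {
            "opcode": opcode,
            "addr": coherent_req["addr"],
            "len": coherent_req.get("len", 64),   # default to one cache line
            # coherence-only fields (e.g. requested line state) are dropped here
        }

    coherent_read = {"op": "RdShared", "addr": 0x1000, "state": "S"}
    print(convert_to_mem_protocol(coherent_read))
    # {'opcode': 'MEM_RD', 'addr': 4096, 'len': 64}
    ```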
  • Publication number: 20220206864
    Abstract: Examples described herein relate to causing execution of a workload on a device based on characteristics of the device and based on metadata associated with the device identifying execution requirements and software and hardware compatibilities between the device and a platform environment. In some examples, an accelerator device is selected to execute a workload based on characteristics of the accelerator device and based on software and hardware compatibilities between the device and a platform environment of the accelerator device.
    Type: Application
    Filed: March 14, 2022
    Publication date: June 30, 2022
    Inventors: Sundar NADATHUR, Susanne M. BALLE, Andrzej KURIATA, Duane E. GALBI, Nagabhushan CHITLUR, Francesc GUIM BERNAT, Alexander BACHMUTSKY
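    A sketch of metadata-based device selection as described above, assuming device metadata advertises driver, memory, and framework support; all field names and values are illustrative.

    ```python
    def pick_device(workload, devices):
        """Choose an accelerator whose advertised metadata satisfies the
        workload's execution requirements and whose software/hardware versions
        are compatible with the platform environment."""
        for dev in devices:
            meta = dev["metadata"]
            if (meta["driver"] in workload["compatible_drivers"]
                    and meta["memory_gb"] >= workload["min_memory_gb"]
                    and workload["framework"] in meta["frameworks"]):
                return dev["name"]
        return None

    devices = [
        {"name": "fpga0", "metadata": {"driver": "opae-2.0", "memory_gb": 8, "frameworks": ["openvino"]}},
        {"name": "gpu0", "metadata": {"driver": "cuda-12", "memory_gb": 24, "frameworks": ["torch"]}},
    ]
    workload = {"compatible_drivers": ["cuda-12"], "min_memory_gb": 16, "framework": "torch"}
    print(pick_device(workload, devices))   # gpu0
    ```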
  • Publication number: 20220207358
    Abstract: An Infrastructure Processing Unit (IPU), including: a model optimization processor configured to optimize an artificial intelligence (AI) model for an accelerator managed by the IPU, and deploy the optimized AI model to the accelerator for execution of an inference; and a local memory configured to store data related to the AI model optimization.
    Type: Application
    Filed: September 21, 2021
    Publication date: June 30, 2022
    Inventors: Yamini Nimmagadda, Susanne M. Balle, Olugbemisola Oniyinde
  • Publication number: 20220179575
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
  • Patent number: 11336547
    Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: May 17, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Rahul Khanna, Sujoy Sen, Karthik Kumar
  • Publication number: 20220138025
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Application
    Filed: September 10, 2021
    Publication date: May 5, 2022
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
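    A minimal model of the reprovisioning flow described above: configure with a bit stream, run the kernel, keep the output in memory, reconfigure, and feed that output to the next kernel. The accelerator object and the "bit streams" are stand-ins, not the patented implementation.

    ```python
    def run_pipeline(accelerator, bitstreams, initial_input):
        """Model of the reprovisioning flow: configure the accelerator with the
        first bit stream, run its kernel, park the output in sled memory, then
        reconfigure with the next bit stream and feed it that output as input."""
        sled_memory = initial_input
        for bitstream in bitstreams:
            kernel = accelerator.configure(bitstream)   # reprogram the device
            sled_memory = kernel(sled_memory)           # output becomes next input
        return sled_memory

    class FakeAccelerator:
        """Stand-in for the accelerator device; 'configuring' just returns a callable."""
        def configure(self, bitstream):
            return bitstream

    # Two 'kernels' standing in for bit streams: scale, then offset.
    result = run_pipeline(FakeAccelerator(), [lambda x: x * 2, lambda x: x + 3], initial_input=5)
    print(result)   # 13
    ```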
  • Publication number: 20220121455
    Abstract: Various systems and methods for implementing intent-based cluster administration are described herein. An orchestrator system includes: a processor; and memory to store instructions, which when executed by the processor, cause the orchestrator system to: receive, at the orchestrator system, an administrative intent-based service level objective (SLO) for an infrastructure configuration of an infrastructure; map the administrative intent-based SLO to a set of imperative policies; deploy the set of imperative policies to the infrastructure; monitor performance of the infrastructure; detect non-compliance with the set of imperative policies; and modify the administrative intent-based SLO to generate a revised set of imperative policies that cause the performance of the infrastructure to be compliant with the revised set of imperative policies.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Adrian Hoban, Thijs Metsch, Francesc Guim Bernat, John J. Browne, Kshitij Arun Doshi, Mark Yarvis, Bin Li, Susanne M. Balle, Benjamin Walker, David Cremins, Mats Gustav Agerstam, Marcos E. Carranza, Mikko Ylinen, Dario Nicolas Oliver, John Mangan
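    One pass of the intent-to-policy loop described above, sketched under the assumption that the SLO is a tail-latency target and the imperative policy is a CPU quota; both are invented for the example.

    ```python
    def reconcile(slo_target_p99_ms, policies, observe):
        """One pass of the administrative loop: deploy imperative policies derived
        from the intent-based SLO, observe the infrastructure, and tighten the
        policies if the observed performance is out of compliance."""
        observed_p99 = observe(policies)
        if observed_p99 > slo_target_p99_ms:            # non-compliance detected
            policies = {**policies, "cpu_quota": policies["cpu_quota"] * 1.25}
        return policies

    # Fake telemetry: more CPU quota -> lower tail latency.
    observe = lambda p: 400 / p["cpu_quota"]
    policies = {"cpu_quota": 2.0}
    policies = reconcile(slo_target_p99_ms=150, policies=policies, observe=observe)
    print(policies)   # {'cpu_quota': 2.5}
    ```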
  • Publication number: 20220124009
    Abstract: Various systems and methods for implementing intent-based orchestration in heterogenous compute platforms are described herein. An orchestration system is configured to: receive, at the orchestration system, a workload request for a workload, the workload request including an intent-based service level objective (SLO); generate rules for resource allocation based on the workload request; generate a deployment plan using the rules for resource allocation and the intent-based SLO; deploy the workload using the deployment plan; monitor performance of the workload using real-time telemetry; and modify the rules for resource allocation and the deployment plan based on the real-time telemetry.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Thijs Metsch, Susanne M. Balle, Patrick Koeberl, Bin Li, Mark Yarvis, Adrian Hoban, Kshitij Arun Doshi, Francesc Guim Bernat, Cesar Martinez-Spessot, Mats Gustav Agerstam, Dario Nicolas Oliver, Marcos E. Carranza, John J. Browne, Mikko Ylinen, David Cremins
  • Publication number: 20220113911
    Abstract: Methods, apparatus, and software for remote storage of hardware microservices hosted on other processing units (XPUs) and SOC-XPU Platforms. The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). Software, via execution on the SOC, enables the platform to pre-provision storage space on a remote storage node and assign the storage space to the platform, wherein the pre-provisioned storage space includes one or more container images to be implemented as one or more hardware (HW) microservice front-ends. The XPU/FPGA is configured to implement one or more accelerator functions used to accelerate HW microservice backend operations that are offloaded from the one or more HW microservice front-ends. The platform is also configured to pre-provision a remote storage volume containing worker node components and access and persistently store worker node components.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Andrzej KURIATA, Susanne M. BALLE, Duane E. GALBI, Sundar NADATHUR, Nagabhushan CHITLUR, Francesc GUIM BERNAT, Alexander BACHMUTSKY
  • Publication number: 20220114251
    Abstract: Various systems and methods for implementing reputation management and intent-based security mechanisms are described herein.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Adrian Hoban, Thijs Metsch, Dario Nicolas Oliver, Marcos E. Carranza, Mats Gustav Agerstam, Bin Li, Patrick Koeberl, Susanne M. Balle, John J. Browne, Cesar Martinez-Spessot, Ned M. Smith
  • Publication number: 20220116455
    Abstract: Various systems and methods for implementing computational storage are described herein.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Inventors: Arun Raghunath, Mohammad Chowdhury, Michael Mesnier, Ravishankar R. Iyer, Ian Adams, Thijs Metsch, John J. Browne, Adrian Hoban, Veeraraghavan Ramamurthy, Patrick Koeberl, Francesc Guim Bernat, Kshitij Arun Doshi, Susanne M. Balle, Bin Li
  • Patent number: 11290392
    Abstract: Technologies for pooling accelerators over fabric are disclosed. In the illustrative embodiment, an application may access an accelerator device over an application programming interface (API), and the API can access either a local accelerator device or a remote accelerator device located on a remote accelerator sled over a network fabric. The API may employ a send queue and a receive queue to send and receive command capsules to and from the accelerator sled.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: March 29, 2022
    Assignee: Intel Corporation
    Inventors: Sujoy Sen, Mohan J. Kumar, Donald L. Faw, Susanne M. Balle, Narayan Ranganathan
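    A sketch of the API behavior described above, assuming local calls are dispatched directly while remote requests are wrapped in command capsules and queued for the fabric; class and field names are illustrative.

    ```python
    from collections import deque

    class AcceleratorAPI:
        """Sketch of the API surface: the caller submits work the same way
        whether the accelerator is local or sits on a remote sled; remote
        requests are wrapped in command capsules and pushed onto a send queue."""

        def __init__(self, local_devices):
            self.local_devices = local_devices          # name -> callable
            self.send_queue = deque()                   # capsules bound for the fabric
            self.recv_queue = deque()                   # completions from the fabric

        def submit(self, device, payload):
            if device in self.local_devices:
                return self.local_devices[device](payload)       # direct local call
            capsule = {"target": device, "payload": payload}     # remote path
            self.send_queue.append(capsule)
            return None                                          # completion arrives via recv_queue

    api = AcceleratorAPI(local_devices={"fpga0": lambda p: p[::-1]})
    print(api.submit("fpga0", "abc"))      # 'cba'  (local)
    api.submit("remote-sled3/fpga1", "abc")
    print(api.send_queue)                  # deque([{'target': 'remote-sled3/fpga1', 'payload': 'abc'}])
    ```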
  • Patent number: 11269395
    Abstract: Technologies for providing adaptive power management in an accelerator sled include an accelerator sled having circuitry to determine, based on (i) a total power budget for the accelerator sled, (ii) service level agreement (SLA) data indicative of a target performance of a kernel, and (iii) profile data indicative of a performance of the kernel as a function of a power utilization of the kernel, a power utilization limit for the kernel to be executed by an accelerator device on the accelerator sled. Additionally, the circuitry is to allocate the determined power utilization limit to the kernel and execute the kernel under the allocated power utilization limit.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: March 8, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Sujoy Sen, Evan Custodio, Paul H. Dormitzer
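    A sketch of the power-limit computation described above, assuming the profile data maps candidate power levels to measured kernel performance and the SLA is a performance floor; the numbers and names are made up.

    ```python
    def power_limit_for_kernel(total_budget_w, already_allocated_w, sla_target_perf, profile):
        """Pick the smallest power level in the kernel's profile that meets the
        SLA performance target, capped by what is left of the sled's power budget.
        `profile` maps candidate power levels (watts) to measured performance."""
        remaining = total_budget_w - already_allocated_w
        candidates = [w for w, perf in sorted(profile.items()) if perf >= sla_target_perf]
        if not candidates:
            return min(remaining, max(profile))   # SLA not reachable: give what we can
        return min(candidates[0], remaining)

    profile = {20: 50, 30: 80, 40: 95}            # watts -> ops/s (made-up numbers)
    print(power_limit_for_kernel(total_budget_w=100, already_allocated_w=65,
                                 sla_target_perf=75, profile=profile))   # 30
    ```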