Patents by Inventor Francesc Guim

Francesc Guim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240195605
    Abstract: Technologies for dynamic accelerator selection include a compute sled. The compute sled includes a network interface controller to communicate with a remote accelerator of an accelerator sled over a network, where the network interface controller includes a local accelerator and a compute engine. The compute engine is to obtain network telemetry data indicative of a level of bandwidth saturation of the network. The compute engine is also to determine whether to accelerate a function managed by the compute sled. The compute engine is further to determine, in response to a determination to accelerate the function, whether to offload the function to the remote accelerator of the accelerator sled based on the telemetry data. The compute engine is additionally to assign, in response to a determination not to offload the function to the remote accelerator, the function to the local accelerator of the network interface controller.
    Type: Application
    Filed: December 15, 2023
    Publication date: June 13, 2024
    Inventor: Francesc Guim Bernat
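
A minimal Python sketch of the selection logic in this abstract, assuming a simple saturation threshold; all names and values are illustrative, not taken from the filing.

```python
# Hypothetical illustration of the offload decision above; names and
# thresholds are assumptions, not details from the patent.
from dataclasses import dataclass

@dataclass
class NetworkTelemetry:
    bandwidth_saturation: float  # fraction of link capacity in use, 0.0-1.0

def place_function(needs_acceleration: bool, telemetry: NetworkTelemetry,
                   saturation_limit: float = 0.8) -> str:
    """Pick an execution target for a function managed by the compute sled."""
    if not needs_acceleration:
        return "general-purpose-cores"
    # Offload to the remote accelerator only while the network has headroom;
    # otherwise fall back to the NIC's local accelerator.
    if telemetry.bandwidth_saturation < saturation_limit:
        return "remote-accelerator"
    return "local-nic-accelerator"

print(place_function(True, NetworkTelemetry(bandwidth_saturation=0.93)))
# -> local-nic-accelerator
```
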
  • Publication number: 20240193284
    Abstract: Techniques and mechanisms to allocate functionality of a chiplet for access by one or more processor cores which are coupled to a remote processor via a network switch. In an embodiment, a composite chip communicates with the switch via a Compute Express Link (CXL) link. The switch receives capability information which identifies both a chiplet of the composite chip and a functionality which is available from a resource of that chiplet. Based on the capability information, the switch provides an inventory of chiplet resources. In response to an allocation request, the switch accesses the inventory to identify whether a suitable chiplet resource is available. Based on the access, the switch configures a chip to enable an allocation of a chiplet resource. In another embodiment, the chiplet resource is allocated at a sub-processor level of granularity, and access to the chiplet resource by one or more local processor cores is disabled.
    Type: Application
    Filed: December 13, 2022
    Publication date: June 13, 2024
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Marcos Carranza, Kshitij Doshi, Ned Smith, Karthik Kumar
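
The switch-side inventory lends itself to a toy model; the sketch below assumes an invented API (register_capability, allocate) that the patent does not define.

```python
# Toy model of the switch-side chiplet inventory sketched in the abstract.
class ChipletInventory:
    def __init__(self):
        self._free = {}  # (chip_id, chiplet_id) -> set of functionalities

    def register_capability(self, chip_id, chiplet_id, functionality):
        # Capability information arrives over the CXL link from a composite chip.
        self._free.setdefault((chip_id, chiplet_id), set()).add(functionality)

    def allocate(self, functionality):
        # Serve an allocation request by scanning the inventory for a match.
        for key, funcs in self._free.items():
            if functionality in funcs:
                funcs.remove(functionality)
                return key  # the switch would now configure the chip accordingly
        return None

inv = ChipletInventory()
inv.register_capability("chip0", "chiplet3", "matrix-multiply")
print(inv.allocate("matrix-multiply"))  # -> ('chip0', 'chiplet3')
```
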
  • Publication number: 20240193617
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed. An example apparatus includes programmable circuitry to at least: obtain a first response associated with an estimate of emissions to be produced by execution of a workload on first hardware; obtain a second response associated with an estimate of emissions to be produced by execution of the workload on second hardware; and assign one of the first or the second hardware to execute the workload based on the first response and the second response, the assigned one of the first or the second hardware to at least one of utilize more time or more memory to execute the workload than the other of the first or the second hardware.
    Type: Application
    Filed: December 15, 2023
    Publication date: June 13, 2024
    Inventors: Francesc Guim Bernat, Karthik Kumar, Akhilesh S. Thyagaturu, Thijs Metsch, Adrian Hoban
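
The core comparison reduces to choosing the lower-emissions hardware even at a time or memory cost; a minimal sketch, assuming precomputed per-hardware estimates:

```python
# Emissions-aware placement; the query/response mechanism from the abstract
# is abstracted into two precomputed estimates (assumed units: grams CO2e).
def assign_hardware(first_emissions_g: float, second_emissions_g: float) -> str:
    """Return the hardware whose estimated emissions for the workload are
    lower, even if that choice costs more time or memory."""
    return "first" if first_emissions_g <= second_emissions_g else "second"

# A slower, lower-power node may win on emissions despite a longer runtime.
print(assign_hardware(first_emissions_g=120.0, second_emissions_g=95.0))  # -> second
```
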
  • Publication number: 20240185714
    Abstract: The disclosure relates to systems, methods, and devices for managing traffic through a road segment and/or intersection. The traffic management system may place traffic objects in a collaboration group for coordinating movements in the road segment and/or intersection in response to a received indication that an emergency vehicle has a planned route that includes the road segment and/or intersection. The traffic management system may determine a movement plan for each traffic object in the collaboration group based on received measurements about the road segment and the planned route of the emergency vehicle. The traffic management system may control a transmitter to send the movement plan to each traffic object in the collaboration group.
    Type: Application
    Filed: September 24, 2021
    Publication date: June 6, 2024
    Inventors: Satish Jha, Kathiravetpillai Sivanesan, S M Iftekharul Alam, Kuilin Clark Chen, Kshitij Doshi, Leonardo Gomes Baltar, Francesc Guim Bernat, Arvind Merwaday, Markus Dominik Mueck, Suman A. Sehra, Vesh Raj Sharma Banjade, Soo Jin Tan
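
One way to picture the collaboration-group step is the sketch below; the data model and the placeholder movement plan are assumptions for illustration.

```python
# Rough sketch of forming a collaboration group along an emergency vehicle's
# planned route; fields and plan contents are invented stand-ins.
def build_collaboration_group(traffic_objects, planned_route):
    """Group every traffic object currently in a segment on the planned route."""
    route_segments = set(planned_route)
    return [obj for obj in traffic_objects if obj["segment"] in route_segments]

def movement_plans(group):
    # Placeholder plan: direct each object to yield toward the road edge.
    return {obj["id"]: "pull-to-nearest-edge" for obj in group}

objs = [{"id": "car1", "segment": "seg-7"}, {"id": "car2", "segment": "seg-9"}]
group = build_collaboration_group(objs, planned_route=["seg-7", "seg-8"])
for obj_id, plan in movement_plans(group).items():
    print(f"transmit {plan!r} to {obj_id}")  # one message per group member
```
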
  • Publication number: 20240179578
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to manage network slices. An example apparatus includes interface circuitry to acquire network information, machine-readable instructions, and at least one processor circuit to be programmed by the machine-readable instructions to reserve first network slices to satisfy service level objectives (SLOs) corresponding to first nodes, reserve second network slices to satisfy SLOs corresponding to second nodes, and reconfigure the first network slices to accept network communications from the second nodes when the network communications from the second nodes exceed a performance metric threshold.
    Type: Application
    Filed: January 30, 2024
    Publication date: May 30, 2024
    Inventors: Akhilesh Shivanna Thyagaturu, Hassnaa Moustafa Ep. Yehia, Jing Zhu, Karthik Kumar, Shu-Ping Yeh, Henning Schroeder, Menglei Zhang, Mohit Kumar Garg, Shiva Radhakrishnan Iyer, Francesc Guim Bernat
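
A compressed sketch of the reconfiguration rule, assuming dictionary stand-ins for slices and a made-up performance metric:

```python
# Illustrative slice reconfiguration: first-tier slices are opened to
# second-tier nodes whose traffic exceeds the metric threshold.
def reconfigure_slices(first_slices, second_node_traffic, metric_threshold):
    overloaded = {n for n, load in second_node_traffic.items()
                  if load > metric_threshold}
    for s in first_slices:
        s["accepts"] |= overloaded  # admit overflow traffic alongside first nodes
    return first_slices

slices = [{"id": "slice-A", "accepts": {"node1"}}]
print(reconfigure_slices(slices, {"node7": 0.95}, metric_threshold=0.9))
```
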
  • Publication number: 20240179078
    Abstract: Embodiments may be generally directed to techniques to cause communication of a registration request between a first end-point and a second end-point of an end-to-end path, the registration request to establish resource load monitoring for one or more resources of the end-to-end path, receive one or more acknowledgements indicating resource loads for each of the one or more resources of the end-to-end path, at least one of the acknowledgements to indicate a resource of the one or more resources is not meeting a threshold requirement for the end-to-end path, and perform an action for communication traffic utilizing the one or more resources based on the acknowledgement.
    Type: Application
    Filed: December 1, 2023
    Publication date: May 30, 2024
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Mark A. Schmisseur, Steen Larsen
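
The acknowledgement-handling step might look roughly like this sketch; the message fields and threshold are invented.

```python
# Hand-wavy sketch of the registration/acknowledgement flow for end-to-end
# resource load monitoring; formats are assumptions.
def handle_acknowledgements(acks, threshold):
    """Return the resources on the end-to-end path that miss the threshold."""
    return [a["resource"] for a in acks if a["load"] > threshold]

acks = [{"resource": "switch-2", "load": 0.97}, {"resource": "nic-0", "load": 0.40}]
failing = handle_acknowledgements(acks, threshold=0.9)
if failing:
    # e.g., reroute or throttle traffic that uses the overloaded resources
    print("perform action for traffic via:", failing)
```
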
  • Patent number: 11996992
    Abstract: Various systems and methods for providing opportunistic placement of compute in an edge network are described herein. A node in an edge network may be configured to access a service level agreement related to a workload, the workload to be orchestrated for a user equipment by the node; modify a machine learning model based on the service level agreement; implement the machine learning model to identify resource requirements to execute the workload in a manner to satisfy the service level agreement; initiate resource assignments from a resource provider, the resource assignments to satisfy the resource requirements; construct a resource hierarchy from the resource assignments; initiate execution of the workload using resources from the resource hierarchy; and monitor and adapt execution of the workload based on the resource hierarchy in response to the execution of the workload.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: May 28, 2024
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, S M Iftekharul Alam, Satish Chandra Jha, Vesh Raj Sharma Banjade, Christian Maciocco, Kshitij Arun Doshi, Francesc Guim Bernat, Nageen Himayat
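
A very loose sketch of the placement loop, with the machine learning model reduced to a rule of thumb and every identifier invented:

```python
# SLA in, resource hierarchy out; the ML step is a stand-in heuristic.
def identify_requirements(sla: dict) -> dict:
    # Stand-in for the (modified) machine learning model in the abstract.
    return {"cpus": 4 if sla["latency_ms"] < 10 else 2, "mem_gb": 8}

def orchestrate(sla: dict) -> dict:
    requirements = identify_requirements(sla)
    # Initiate resource assignments from a provider, then arrange them into a
    # (here trivial) resource hierarchy used to launch, monitor, and adapt.
    hierarchy = {"edge-node-0": requirements}
    print("launching workload on", hierarchy)
    return hierarchy

orchestrate({"latency_ms": 5})
```
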
  • Patent number: 11994997
    Abstract: Systems, apparatuses and methods may provide for a memory controller to manage quality of service enforcement and migration between local and pooled memory. A memory controller may include logic to communicate with a local memory and with a pooled memory controller to track memory page usage on a per application basis, instruct the pooled memory controller to perform a quality of service enforcement in response to a determination that an application is latency bound or bandwidth bound, wherein the determination that the application is latency bound or bandwidth bound is based on a cycles per instruction determination, and instruct a Direct Memory Access engine to perform a migration from a remote memory to the local memory in response to a determination that the quality of service cannot be enforced.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: May 28, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur
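
The cycles-per-instruction test invites a small sketch; the thresholds below are invented, and the enforcement and migration actions are reduced to prints.

```python
# Simplified CPI-based classification and the resulting enforce-or-migrate
# decision; cpi_limit is an assumption.
def classify(cycles: int, instructions: int, cpi_limit: float = 2.0) -> str:
    """Flag an application as latency/bandwidth bound from its CPI."""
    cpi = cycles / instructions
    return "latency-or-bandwidth-bound" if cpi > cpi_limit else "compute-bound"

def enforce_or_migrate(app: dict, qos_enforceable: bool):
    if classify(app["cycles"], app["instructions"]) == "latency-or-bandwidth-bound":
        if qos_enforceable:
            print(f"instruct pooled memory controller: enforce QoS for {app['name']}")
        else:
            print(f"instruct DMA engine: migrate {app['name']} pages to local memory")

enforce_or_migrate({"name": "db", "cycles": 9_000, "instructions": 3_000},
                   qos_enforceable=False)
```
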
  • Patent number: 11995330
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: May 28, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
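
A minimal stand-in for the kernel-to-sled database lookup; the schema and names are illustrative only.

```python
# Map a requested kernel to an accelerator sled already configured with it.
KERNEL_DB = {  # kernel id -> sleds whose accelerators are configured with it
    "crc32-kernel": ["sled-4", "sled-9"],
}

def assign_task(kernel_id: str) -> str:
    sleds = KERNEL_DB.get(kernel_id)
    if not sleds:
        raise LookupError(f"no accelerator sled configured with {kernel_id}")
    return sleds[0]  # a real scheduler would also weigh load, locality, etc.

print(assign_task("crc32-kernel"))  # -> sled-4
```
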
  • Patent number: 11994932
    Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated.
    Type: Grant
    Filed: June 21, 2020
    Date of Patent: May 28, 2024
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat
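
One could caricature the energy-threshold-driven reconfiguration as a staged policy; the battery levels and actions below are assumptions layered on the abstract.

```python
# Progressively aggressive memory reconfiguration as harvested energy drops.
def on_energy_level(battery_fraction: float):
    if battery_fraction < 0.2:
        yield "flush dirty lines from tier-1 DIMM cache to tier-2"
        yield "copy hot data off soon-to-be-idle DIMMs"
        yield "power down vacated DIMMs / deactivate memory ranks"
    elif battery_fraction < 0.5:
        yield "consolidate data onto fewer memory ranks within the tier"

for action in on_energy_level(0.15):
    print(action)
```
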
  • Publication number: 20240171657
    Abstract: A computing node in an edge computing network includes a network interface card (NIC), memory storing a plurality of digital object representations of a corresponding plurality of participating entities, and processing circuitry. The processing circuitry detects a message from a participating entity of the plurality. The message is received via the NIC and is associated with a messaging service of the edge computing network. The message is mapped to a service class of a plurality of available service classes based on a service request associated with the message. The message is processed to extract one or more characteristics of the service request. A digital object representation of the plurality of digital object representations is updated based on the one or more characteristics of the service request, the digital object representation corresponding to the participating entity.
    Type: Application
    Filed: June 25, 2021
    Publication date: May 23, 2024
    Inventors: Vesh Raj Sharma Banjade, Kathiravetpillai Sivanesan, Hassnaa Moustafa, Suman A. Sehra, Satish Chandra Jha, Arvind Merwaday, S M Iftekharul Alam, Francesc Guim Bernat, Rajesh Poornachandran, Xin Zhang, Rony Ferzli, Leonardo Gomes Baltar
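
A condensed sketch of the message path (classify, extract, update the digital object representation), with invented class names and fields:

```python
# Map a message to a service class, then update the sender's digital object
# representation with characteristics of the request.
SERVICE_CLASSES = {"telemetry": "best-effort", "safety": "low-latency"}

digital_objects = {"vehicle-42": {"requests_seen": 0, "last_class": None}}

def handle_message(sender: str, message: dict) -> str:
    service_class = SERVICE_CLASSES.get(message["service"], "default")
    rep = digital_objects[sender]
    rep["requests_seen"] += 1          # characteristic extracted from the request
    rep["last_class"] = service_class
    return service_class

handle_message("vehicle-42", {"service": "safety"})
print(digital_objects["vehicle-42"])
# -> {'requests_seen': 1, 'last_class': 'low-latency'}
```
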
  • Patent number: 11989587
    Abstract: An apparatus and method for dynamic resource allocation with mile/performance markers.
    Type: Grant
    Filed: June 27, 2020
    Date of Patent: May 21, 2024
    Assignee: Intel Corporation
    Inventors: Rameshkumar Illikkal, Andrew J. Herdrich, Francesc Guim Bernat, Ravishankar Iyer
  • Patent number: 11991054
    Abstract: Methods and apparatus for jitter-less distributed Function as a Service (FaaS) using flavor clustering. A set of FaaS functions clustered by flavor chaining is implemented to deploy one or more FaaS flavor clusters on one or more edge nodes, wherein each flavor is defined by a set of resource requirements mapped into a jitter Quality of Service (QoS) and is executed on at least one hardware computing component on the one or more edge nodes. One or more jitter controllers are implemented to control and monitor execution of FaaS functions in the one or more FaaS flavor clusters such that the functions are executed to meet jitter-less QoS requirements. Jitter controllers include platform jitter-less function controllers in edge nodes and a data center FaaS jitter-less controller. A jitter-less Software Defined Wide Area Network (SD-WAN) controller is also provided to supply the network resources used by FaaS flavor clusters and to satisfy connectivity requirements between the edge nodes.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: May 21, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ned M. Smith, Sunil Cheruvu, Alexander Bachmutsky, James Coleman
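
A flavor can be pictured as a resource bundle carrying a jitter QoS target; the toy controller check below is an assumption, not the patented design.

```python
from dataclasses import dataclass

@dataclass
class Flavor:
    name: str
    cpu_cores: int
    max_jitter_us: float  # jitter QoS the flavor must meet

def jitter_controller(flavor: Flavor, observed_jitter_us: float) -> str:
    # A platform jitter-less function controller would throttle, migrate, or
    # re-pin functions when observed jitter breaks the flavor's QoS.
    if observed_jitter_us > flavor.max_jitter_us:
        return f"{flavor.name}: QoS violated, trigger corrective action"
    return f"{flavor.name}: within jitter QoS"

print(jitter_controller(Flavor("video-infer", cpu_cores=4, max_jitter_us=50.0), 72.5))
```
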
  • Patent number: 11983437
    Abstract: In one embodiment, an apparatus includes: a first queue to store requests that are guaranteed to be delivered to a persistent memory; a second queue to store requests that are not guaranteed to be delivered to the persistent memory; a control circuit to receive the requests and to direct the requests to the first queue or the second queue; and an egress circuit coupled to the first queue to deliver the requests stored in the first queue to the persistent memory even when a power failure occurs. Other embodiments are described and claimed.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: May 14, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Donald Faw, Thomas Willhalm
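
A software analogue of the two-queue split is sketched below; the power-fail persistence itself is hardware behavior that a sketch cannot reproduce.

```python
# The control logic routes requests by delivery guarantee; an egress circuit
# would drain the guaranteed queue to persistent memory even on power failure.
from collections import deque

guaranteed = deque()   # delivery to persistent memory is guaranteed
best_effort = deque()  # no delivery guarantee

def submit(request: dict):
    (guaranteed if request.get("guaranteed") else best_effort).append(request)

submit({"addr": 0x1000, "data": b"log", "guaranteed": True})
submit({"addr": 0x2000, "data": b"tmp", "guaranteed": False})
print(len(guaranteed), len(best_effort))  # -> 1 1
```
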
  • Patent number: 11983408
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to allocate a first memory portion to a first application as a combination of a local memory and remote memory, wherein the remote memory is shared between multiple compute nodes, and manage a first memory balloon associated with the first memory portion based on two or more memory tiers associated with the local memory and the remote memory. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: May 14, 2024
    Assignee: Intel Corporation
    Inventors: Rasika Subramanian, Lidia Warnes, Francesc Guim Bernat, Mark A. Schmisseur, Durgesh Srivastava
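
A thumbnail of a two-tier balloon for one application; the sizes and the inflate/deflate policy are invented.

```python
# A memory balloon spanning a local tier and a shared remote tier.
class MemoryBalloon:
    def __init__(self, local_gb: int, remote_gb: int):
        self.tiers = {"local": local_gb, "remote": remote_gb}  # remote is shared

    def inflate(self, tier: str, gb: int):
        # Reclaim memory from the application in the given tier.
        self.tiers[tier] -= gb

    def deflate(self, tier: str, gb: int):
        # Return memory to the application in the given tier.
        self.tiers[tier] += gb

balloon = MemoryBalloon(local_gb=16, remote_gb=48)
balloon.inflate("local", 4)   # pressure the app toward the shared remote tier
print(balloon.tiers)          # -> {'local': 12, 'remote': 48}
```
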
  • Publication number: 20240152460
    Abstract: An example disclosed apparatus comprises a trigger monitor to detect an event satisfying a cache scrape trigger rule during execution of a workload, and a cache scraper to scrape cache data from cache in hardware during the execution of the workload.
    Type: Application
    Filed: December 19, 2023
    Publication date: May 9, 2024
    Inventors: John J. Browne, Kshitij Arun Doshi, Thijs Metsch, Francesc Guim Bernat, Adrian Hoban
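
The trigger-rule gating might be modeled as below; the event and the scraper are mocked, since the real mechanism reads hardware cache state.

```python
# Gate a cache scrape on an event satisfying a trigger rule during a workload.
def trigger_monitor(event: dict, rule) -> bool:
    return rule(event)

def cache_scraper():
    return {"lines": ["<cache line snapshot>"]}  # stand-in for hardware scrape

rule = lambda e: e["type"] == "perf-counter" and e["value"] > 1_000_000
event = {"type": "perf-counter", "value": 2_500_000}
if trigger_monitor(event, rule):
    print(cache_scraper())  # scrape happens during workload execution
```
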
  • Publication number: 20240143505
    Abstract: Methods and apparatus for dynamic selection of super queue size for CPUs with a higher number of cores. An apparatus includes a plurality of compute modules, each module including a plurality of processor cores with integrated first level (L1) caches and a shared second level (L2) cache, a plurality of Last Level Caches (LLCs) or LLC blocks, and a plurality of memory interface blocks interconnected via a mesh interconnect. A compute module is configured to arbitrate access to the shared L2 cache and enqueue L2 cache misses in a super queue (XQ). The compute module is further configured to dynamically adjust the size of the XQ during runtime operations. The compute module tracks parameters comprising an L2 miss rate or count and LLC hit latency and adjusts the XQ size as a function of these parameters. A lookup table using the L2 miss rate/count and LLC hit latency may be implemented to dynamically select the XQ size.
    Type: Application
    Filed: December 22, 2023
    Publication date: May 2, 2024
    Inventors: Amruta Misra, Ajay Ramji, Rajendrakumar Chinnaiyan, Chris MacNamara, Karan Puttannaiah, Pushpendra Kumar, Vrinda Khirwadkar, Sanjeevkumar Shankrappa Rokhade, John J. Browne, Francesc Guim Bernat, Karthik Kumar, Farheena Tazeen Syeda
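
The lookup-table selection maps naturally to a small sketch; the bucket boundaries and XQ sizes below are invented, not values from the filing.

```python
# Select a super-queue (XQ) size from L2 miss rate and LLC hit latency.
import bisect

MISS_BUCKETS = [0.02, 0.10]   # L2 miss-rate boundaries
LATENCY_BUCKETS = [40, 80]    # LLC hit latency boundaries (cycles)
XQ_SIZE = [                   # rows: miss-rate bucket, cols: latency bucket
    [16, 24, 32],
    [24, 32, 48],
    [32, 48, 64],
]

def select_xq_size(l2_miss_rate: float, llc_hit_latency: int) -> int:
    r = bisect.bisect_right(MISS_BUCKETS, l2_miss_rate)
    c = bisect.bisect_right(LATENCY_BUCKETS, llc_hit_latency)
    return XQ_SIZE[r][c]

print(select_xq_size(0.12, 90))  # -> 64, deepest queue for the worst case
```
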
  • Publication number: 20240146639
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to reduce emissions in guided network environments. An apparatus includes interface circuitry, machine readable instructions, and programmable circuitry to at least one of instantiate or execute the machine readable instructions to collect data from respective network nodes corresponding to a request to access information, predict an emission of accessing the information via the respective network nodes using the data, and select a network path including at least one of the network nodes based on the predicted emission.
    Type: Application
    Filed: December 21, 2023
    Publication date: May 2, 2024
    Inventors: Francesc Guim Bernat, Manish Dave, Karthik Kumar, Akhilesh S. Thyagaturu, Matthew Henry Birkner, Adrian Hoban
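
A minimal sketch of choosing a path by predicted emissions, with the per-node predictor reduced to a lookup:

```python
# Pick the network path whose traversal is predicted to emit the least.
def predict_path_emissions(path, node_emissions):
    return sum(node_emissions[n] for n in path)

def select_path(paths, node_emissions):
    return min(paths, key=lambda p: predict_path_emissions(p, node_emissions))

emissions = {"a": 1.2, "b": 0.4, "c": 0.9}   # gCO2e per node traversal (made up)
print(select_path([["a", "c"], ["b", "c"]], emissions))  # -> ['b', 'c']
```
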
  • Publication number: 20240143410
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute device is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: January 5, 2024
    Publication date: May 2, 2024
    Applicant: Intel Corporation
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
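
The division step can be illustrated with a proportional split, which is an assumption; the patent's job analysis is abstracted away.

```python
# Divide a job into tasks sized to each accelerator's relative capacity.
def divide_job(total_work: int, accelerator_capacities: dict) -> dict:
    cap_sum = sum(accelerator_capacities.values())
    shares = {dev: total_work * cap // cap_sum
              for dev, cap in accelerator_capacities.items()}
    # Hand any rounding remainder to the most capable device.
    best = max(accelerator_capacities, key=accelerator_capacities.get)
    shares[best] += total_work - sum(shares.values())
    return shares

print(divide_job(1000, {"fpga0": 3, "gpu0": 7}))  # -> {'fpga0': 300, 'gpu0': 700}
```
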
  • Patent number: 11972298
    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Evan Custodio, Francesc Guim Bernat, Suraj Prabhakaran, Trevor Cooper, Ned M. Smith, Kshitij Doshi, Petar Torre
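
A boiled-down version of the migration decision; the trigger condition and the request payload are assumptions.

```python
# Request transformed workload data when the requester moves toward a
# different edge location than the one currently hosting its accelerators.
def maybe_migrate(present_edge: str, ue_location: str, accelerators: list):
    if ue_location != present_edge:
        for acc in accelerators:
            print(f"request transformed workload data from {acc} "
                  f"for accelerators at {ue_location}")

maybe_migrate("edge-west", ue_location="edge-east", accelerators=["fpga0"])
```
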