Patents by Inventor Francesc Guim Bernat

Francesc Guim Bernat has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230328547
    Abstract: Embodiments for automatically tuning heterogenous wireless networks are disclosed herein. In one example, performance data is received for multiple wireless networks. The wireless networks are based on multiple wireless technologies, and the performance data is based on multiple layers of the protocol stacks of the wireless technologies. The performance data is used to determine one or more configuration settings to adjust for one or more of the wireless networks. The determined configuration setting(s) are then adjusted.
    Type: Application
    Filed: June 14, 2023
    Publication date: October 12, 2023
    Applicant: Intel Corporation
    Inventors: Mats G. Agerstam, Francesc Guim Bernat, Marcos E. Carranza, Shekar Ramachandran, Rupali Agrahari
  • Publication number: 20230325246
    Abstract: A platform includes a plurality of hardware blocks to provide respective functionality for use in execution of an application. A subset of the plurality of hardware blocks are deactivated and unavailable for use in the execution of the application at the start of the execution of the application. A hardware profile modification block of the platform receives telemetry data generated by a set of sensors and dynamically activates at least a particular one of the subset of hardware blocks based on physical characteristics identified from the telemetry data, where following activation of the particular hardware block, the execution of the application continues and uses the particular hardware block.
    Type: Application
    Filed: May 31, 2023
    Publication date: October 12, 2023
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, John J. Browne, Amruta Misra, Chris M. MacNamara
  • Publication number: 20230319141
    Abstract: Various systems and methods for providing consensus-based named function execution are described herein. A system is configured to access an interest packet received from a user device, the interest packet including a function name of a function and a data payload; broadcast the interest packet to a plurality of compute nodes, wherein the plurality of compute nodes are configured to execute a respective instance of the function; receive a plurality of responses from the plurality of compute nodes, the plurality of responses including respective results of the execution of the respective instances of the function; analyze the plurality of responses using a consensus protocol to identify a consensus result; and transmit the consensus result to the user device.
    Type: Application
    Filed: June 5, 2023
    Publication date: October 5, 2023
    Inventors: Kshitij Arun Doshi, Francesc Guim Bernat, Sunil Cheruvu, Ned M. Smith, Marcos E. Carranza
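The consensus flow described in publication 20230319141 (broadcast an interest packet carrying a function name and payload to several compute nodes, collect their results, and agree on one via a consensus protocol) can be sketched as follows. This is a hypothetical Python illustration, not code from the patent: simple majority voting stands in for the unspecified consensus protocol, and `execute_with_consensus` and the node callables are invented for the example.

```python
from collections import Counter

def execute_with_consensus(interest, nodes):
    """Broadcast an interest packet to compute nodes and return the
    majority result. `interest` holds a function name and a payload;
    each node is a callable standing in for a remote function instance."""
    responses = [node(interest["function"], interest["payload"]) for node in nodes]
    # Majority-vote consensus over the collected responses.
    result, votes = Counter(responses).most_common(1)[0]
    if votes <= len(responses) // 2:
        raise RuntimeError("no majority consensus among compute nodes")
    return result

# Three node instances; one returns a faulty answer that is outvoted.
nodes = [lambda f, p: p * 2, lambda f, p: p * 2, lambda f, p: p + 1]
consensus = execute_with_consensus({"function": "double", "payload": 21}, nodes)
```

In a real deployment the responses would arrive asynchronously over the network and the consensus step could be Byzantine-fault-tolerant rather than a plain majority vote.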
  • Publication number: 20230318932
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed that direct transmission of data between network-connected devices including circuitry, instructions, and programmable circuitry to at least one of instantiate or execute the instructions to cause the interface circuitry to identify a neural network (NN) to a first device of a first combination of devices corresponding to a first network topology, cause the first device to process first data with a first portion of the NN, and cause a second device of a second combination of devices to process second data with a second portion of the NN, the second combination of devices corresponding to a second network topology.
    Type: Application
    Filed: June 2, 2023
    Publication date: October 5, 2023
    Inventors: Rony Ferzli, Hassnaa Moustafa, Rita Hanna Wouhaybi, Francesc Guim Bernat, Rita Chattopadhyay
  • Patent number: 11768705
    Abstract: Methods, apparatus, systems, and machine-readable storage media are disclosed for an edge computing device that is enabled to access and select between local and remote acceleration resources for edge computing processing. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at each respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: September 26, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Thomas Willhalm, Timothy Verrall
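The selection logic in patent 11768705 (compare telemetry-derived time and cost estimates for local versus remote acceleration against an SLA, then pick one) might look roughly like the sketch below. The data layout, field names, and tie-breaking rule are all assumptions made for the illustration; the patent does not prescribe them.

```python
def select_accelerator(local, remote, sla_deadline_ms):
    """Pick the local or remote acceleration resource using
    telemetry-derived estimates, honoring an SLA deadline and
    preferring the lower-cost feasible option."""
    candidates = [c for c in (local, remote)
                  if c["available"] and c["est_time_ms"] <= sla_deadline_ms]
    if not candidates:
        raise RuntimeError("no resource can meet the SLA deadline")
    # Among feasible options, choose the cheapest; break ties on time.
    return min(candidates, key=lambda c: (c["cost"], c["est_time_ms"]))

local = {"name": "local-fpga", "available": True, "est_time_ms": 8.0, "cost": 1.0}
remote = {"name": "edge-gpu", "available": True, "est_time_ms": 3.0, "cost": 4.0}
choice = select_accelerator(local, remote, sla_deadline_ms=10.0)
```

With a 10 ms deadline both options are feasible, so the cheaper local circuitry wins; tightening the deadline below the local estimate would shift the choice to the remote resource.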
  • Publication number: 20230281113
    Abstract: Techniques for adaptive memory metadata allocation. A processor may determine a first memory region of a plurality of memory regions in a memory pool coupled to the processor via an interface. The processor may modify a metadata of the first memory region from a first configuration to a second configuration, where the first configuration is associated with a first number of error correction code (ECC) bits and the second configuration is associated with a second number of ECC bits.
    Type: Application
    Filed: April 7, 2023
    Publication date: September 7, 2023
    Applicant: INTEL CORPORATION
    Inventors: Karthik Kumar, Francesc Guim Bernat, Ramamurthy Krithivas
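The adaptive metadata allocation in publication 20230281113 (switch a memory region's metadata from one configuration to another, each associated with a different number of ECC bits) can be sketched as below. The `MemoryRegion` class and `retune_ecc` helper are hypothetical stand-ins for what would be hardware-managed metadata in a CXL-style memory pool.

```python
class MemoryRegion:
    """Region of a pooled memory whose metadata reserves a
    configurable number of ECC bits (hypothetical model)."""
    def __init__(self, region_id, ecc_bits):
        self.region_id = region_id
        self.ecc_bits = ecc_bits

def retune_ecc(pool, region_id, new_ecc_bits):
    """Move one region from its first configuration (current ECC
    width) to a second configuration with a different ECC width."""
    region = next(r for r in pool if r.region_id == region_id)
    old = region.ecc_bits
    region.ecc_bits = new_ecc_bits
    return old, new_ecc_bits

pool = [MemoryRegion(0, 8), MemoryRegion(1, 8)]
transition = retune_ecc(pool, 1, 16)  # e.g. stronger ECC for critical data
```

The point of the technique is that ECC strength becomes a per-region, runtime-tunable property instead of a fixed platform-wide choice, trading capacity for reliability where it matters.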
  • Patent number: 11748178
    Abstract: Examples described herein relate to requesting execution of a workload by a next function with data transport overhead tailored based on memory sharing capability with the next function. In some examples, data transport overhead is one or more of: sending a memory address pointer, sending a virtual memory address pointer, or sending data to the next function. In some examples, the memory sharing capability with the next function is based on one or more of: whether the next function shares an enclave with a sender function, the next function shares a physical memory domain with a sender function, or the next function shares a virtual memory domain with a sender function. In some examples, selection of the next function from among multiple instances of the next function is based on one or more of: sharing of memory domain, throughput performance, latency, cost, load balancing, or service level agreement (SLA) requirements.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: September 5, 2023
    Assignee: Intel Corporation
    Inventors: Alexander Bachmutsky, Raghu Kondapalli, Francesc Guim Bernat, Vadim Sukhomlinov
  • Publication number: 20230273597
    Abstract: Telemetry systems for monitoring cooling of compute components and related apparatus and methods are disclosed. An example apparatus includes interface circuitry, machine-readable instructions, and programmable circuitry to at least one of instantiate or execute the machine-readable instructions to generate a heatmap based on outputs of one or more sensors in an environment, the environment including a first compute device, the sensor outputs including a metric associated with a property of a coolant and a location of the sensor in the environment, identify a compute performance metric of the first compute device, determine a cooling parameter for the first compute device based on the heatmap and the compute performance metric, and cause a cooling distribution unit to control flow of the coolant in the environment based on the cooling parameter.
    Type: Application
    Filed: May 8, 2023
    Publication date: August 31, 2023
    Inventors: Francesc Guim Bernat, Amruta Misra, Kshitij Arun Doshi, John J. Browne, Marcos Carranza
  • Publication number: 20230273659
    Abstract: Systems, apparatus, and methods for managing cooling of compute components are disclosed. An example apparatus includes programmable circuitry to at least one of instantiate or execute machine readable instructions to identify a workload to be performed by a compute device, identify a service level objective associated with the workload or the compute device, determine a parameter of a coolant to enable the service level objective to be satisfied during performance of the workload, and cause a cooling distribution unit to control the coolant based on the coolant parameter.
    Type: Application
    Filed: May 8, 2023
    Publication date: August 31, 2023
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Adrian Hoban, Thijs Metsch, John J. Browne
  • Publication number: 20230273821
    Abstract: A method is described. The method includes dispatching jobs across electronic hardware components. The electronic hardware components are to process the jobs. The electronic hardware components are coupled to respective cooling systems. The respective cooling systems are each capable of cooling according to different cooling mechanisms. The different cooling mechanisms have different performance and cost operating realms. The dispatching of the jobs includes assigning the jobs to specific ones of the electronic hardware components to keep the cooling systems operating in one or more of the realms having lower performance and cost than another one of the realms.
    Type: Application
    Filed: April 18, 2023
    Publication date: August 31, 2023
    Inventors: Amruta MISRA, Francesc GUIM BERNAT, Kshitij A. DOSHI, Marcos E. CARRANZA, John J. BROWNE, Arun HODIGERE
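The dispatching idea in publication 20230273821 (assign jobs to hardware components so their cooling systems stay in the cheaper, lower-performance operating realms) can be illustrated with a simple greedy scheduler. The threshold-based "realm" model and all field names below are assumptions for the sketch; the patent describes the goal, not this specific policy.

```python
def dispatch(jobs, components):
    """Assign each job to a component whose cooling system can absorb
    it while staying under its low-cost realm threshold; fall back to
    the least-loaded component when no component fits."""
    assignments = {}
    for job, load in jobs:
        # Components that remain in the cheap cooling realm after
        # taking on this job's load.
        fits = [c for c in components
                if c["load"] + load <= c["low_cost_limit"]]
        target = min(fits or components, key=lambda c: c["load"])
        target["load"] += load
        assignments[job] = target["name"]
    return assignments

components = [{"name": "air-cooled", "load": 0, "low_cost_limit": 50},
              {"name": "liquid-cooled", "load": 0, "low_cost_limit": 90}]
plan = dispatch([("j1", 40), ("j2", 40)], components)
```

Here the second job spills to the liquid-cooled component because adding it to the air-cooled one would push that component's cooling out of its cheap realm.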
  • Publication number: 20230273839
    Abstract: An example apparatus to balance and coordinate power and cooling includes memory; machine-readable instructions; and programmable circuitry to execute the machine-readable instructions to: determine a first service level objective based on a service level agreement and resource data; modify the first service level objective to generate a second service level objective based on the first service level objective and an ambient temperature prediction; determine a resource budget based on a resource usage prediction, the resource budget to identify available resources at a given time; and cause an allocation of cooling resources and power resources for a compute component based on the second service level objective and the resource budget.
    Type: Application
    Filed: May 2, 2023
    Publication date: August 31, 2023
    Inventors: Francesc Guim Bernat, Arun Hodigere, John J. Browne, Henning Schroeder, Kshitij Arun Doshi
  • Patent number: 11743143
    Abstract: Various systems and methods for implementing a service-level agreement (SLA) apparatus receive a request from a requester via a network interface of the gateway, the request comprising an inference model identifier that identifies a handler of the request, and a response time indicator. The response time indicator relates to a time within which the request is to be handled, or indicates an undefined time within which the request is to be handled. The apparatus determines a network location of a handler, which is a platform or an inference model, to handle the request consistent with the response time indicator, and routes the request to the handler at the network location.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: August 29, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Suraj Prabhakaran, Raghu Kondapalli, Alexander Bachmutsky
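The routing decision in patent 11743143 (pick a handler location that can respond within the request's response time indicator, where the indicator may also mark the time as undefined) can be sketched as follows. Representing an undefined response time as `None`, and choosing the fastest feasible handler, are choices made for this illustration only.

```python
def route_request(request, handlers):
    """Route a request to a handler (a platform or an inference model)
    that can respond within the request's response-time indicator.
    A deadline of None means the time is undefined, so every handler
    qualifies; among feasible handlers, pick the fastest."""
    deadline = request.get("response_time_ms")
    feasible = [h for h in handlers
                if deadline is None or h["latency_ms"] <= deadline]
    if not feasible:
        raise RuntimeError("no handler satisfies the response time indicator")
    return min(feasible, key=lambda h: h["latency_ms"])["location"]

handlers = [{"location": "edge-node-1", "latency_ms": 5.0},
            {"location": "cloud-1", "latency_ms": 40.0}]
dest = route_request({"model": "resnet", "response_time_ms": 10.0}, handlers)
```

A 10 ms indicator rules out the 40 ms cloud handler, so the request is routed to the edge node.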
  • Publication number: 20230267004
    Abstract: Various approaches for implementing multi-tenant data protection are described. In an edge computing system deployment, a system includes memory and processing circuitry coupled to the memory. The processing circuitry is configured to obtain a workflow execution plan that includes workload metadata defining a plurality of workloads associated with a plurality of edge service instances executing respectively on one or more edge computing devices. The workload metadata is translated to obtain workload configuration information for the plurality of workloads. The workload configuration information identifies a plurality of memory access configurations and service authorizations identifying at least one edge service instance authorized to access one or more of the memory access configurations. The memory is partitioned into a plurality of shared memory regions using the memory access configurations.
    Type: Application
    Filed: May 1, 2023
    Publication date: August 24, 2023
    Inventors: Kshitij Arun Doshi, Ned M. Smith, Francesc Guim Bernat, Timothy Verrall
  • Patent number: 11736942
    Abstract: A service coordinating entity device includes communications circuitry to communicate with a first access network, processing circuitry, and a memory device. The processing circuitry is to perform operations to, in response to a request for establishing a connection with a user equipment (UE) in a second access network, retrieve a first Trusted Level Agreement (TLA) including trust attributes associated with the first access network. One or more exchanges of the trust attributes of the first TLA and trust attributes of a second TLA associated with the second access network are performed using a computing service executing on the service coordinating entity. A common TLA with trust attributes associated with communications between the first and second access networks is generated based on the exchanges. Data traffic is routed from the first access network to the UE in the second access network based on the trust attributes of the common TLA.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: August 22, 2023
    Assignee: Intel Corporation
    Inventors: Alexander Bachmutsky, Dario Sabella, Francesc Guim Bernat, John J. Browne, Kapil Sood, Kshitij Arun Doshi, Mats Gustav Agerstam, Ned M. Smith, Rajesh Poornachandran, Tarun Viswanathan
  • Publication number: 20230259102
    Abstract: Methods and apparatus for maintaining the cooling systems of distributed compute systems are disclosed. An example apparatus disclosed herein includes memory, machine readable instructions, and programmable circuitry to at least one of instantiate or execute the machine readable instructions to input operational data into a machine-learning model, the operational data including first information relating to a workload of a server and second information relating to an ambient condition of the server, compare a predicted cooling power requirement for a time period with a predicted cooling power availability for the time period, the predicted cooling power requirement based on an output of the machine-learning model, and generate a cooling plan based on the comparison, the cooling plan to define operation of at least one of the server or a cooling system used to cool the server during the time period.
    Type: Application
    Filed: April 27, 2023
    Publication date: August 17, 2023
    Inventors: Amruta Misra, Francesc Guim Bernat, Arun Hodigere, Kshitij Arun Doshi, John J. Browne
  • Publication number: 20230259185
    Abstract: Methods, systems, apparatus, and articles of manufacture to control cooling in an edge environment are disclosed. An example apparatus disclosed herein includes programmable circuitry to determine whether a first cooling parameter for a first edge node is satisfied based on first cooling availability information for the first edge node, when the first cooling parameter is satisfied, cause a first distribution unit to maintain an amount of cooling fluid to the first edge node, and when the first cooling parameter is not satisfied, cause at least one of the first distribution unit or a second distribution unit to adjust the amount of cooling fluid to at least one of the first edge node or a second edge node based on the first cooling availability information and second cooling availability information, the second cooling availability information for the second edge node.
    Type: Application
    Filed: April 19, 2023
    Publication date: August 17, 2023
    Inventors: Francesc Guim Bernat, Amruta Misra, Arun Hodigere, John J. Browne, Kshitij Arun Doshi
  • Publication number: 20230244560
    Abstract: Methods and apparatus for maintaining the cooling systems of distributed compute systems are disclosed. An example apparatus disclosed herein includes memory, machine-readable instructions, and processor circuitry to execute the machine-readable instructions to determine a health of a server, determine a threshold based on a workload service level agreement associated with the server, and in response to determining the health does not satisfy the threshold, throttle a workload on the server.
    Type: Application
    Filed: March 29, 2023
    Publication date: August 3, 2023
    Inventors: Francesc Guim Bernat, Amruta Misra, Arun Hodigere, Kshitij Arun Doshi, John J. Browne
  • Publication number: 20230240055
    Abstract: Methods and apparatus to manage noise in computing systems are disclosed. An example server includes a housing to at least partially contain components of the server, a transducer to output an indication of noise detected outside of the housing, machine readable instructions, and programmable circuitry to at least one of instantiate or execute the machine readable instructions to adjust the operation of the server based on the output of the transducer to reduce noise.
    Type: Application
    Filed: March 31, 2023
    Publication date: July 27, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos Carranza
  • Patent number: 11711268
    Abstract: Methods and apparatus to execute a workload in an edge environment are disclosed. An example apparatus includes a node scheduler to accept a task from a workload scheduler, the task including a description of a workload and tokens, a workload executor to execute the workload, the node scheduler to access a result of execution of the workload and provide the result to the workload scheduler, and a controller to access the tokens and distribute at least one of the tokens to at least one provider, the provider to provide a resource to the apparatus to execute the workload.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: July 25, 2023
    Assignee: INTEL CORPORATION
    Inventors: Ned Smith, Francesc Guim Bernat, Sanjay Bakshi, Katalin Bartfai-Walcott, Kapil Sood, Kshitij Doshi, Robert Munoz
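The token-based scheduling in patent 11711268 (a node accepts a task carrying a workload description and tokens, executes the workload, returns the result, and distributes tokens to resource providers) might be modeled as below. The even per-provider split and every identifier here are invented for the example; the patent leaves the distribution policy open.

```python
def run_task(task, providers, execute):
    """Execute a task's workload, then distribute the task's tokens
    among the providers that supplied resources for the execution."""
    result = execute(task["workload"])
    # Even split for the sketch; any remainder stays with the
    # scheduler in this simple model.
    per_provider = task["tokens"] // len(providers)
    payouts = {p: per_provider for p in providers}
    return result, payouts

result, payouts = run_task({"workload": 6, "tokens": 9},
                           ["power", "compute", "network"],
                           execute=lambda w: w * 7)
```

The result flows back to the workload scheduler while the tokens compensate whichever providers (power, compute, network, and so on) contributed resources.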
  • Patent number: 11706158
    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: July 18, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Anil Rao, Suraj Prabhakaran, Mohan Kumar, Karthik Kumar