Patents by Inventor Petar Torre

Petar Torre has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972298
    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Evan Custodio, Francesc Guim Bernat, Suraj Prabhakaran, Trevor Cooper, Ned M. Smith, Kshitij Doshi, Petar Torre
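
The flow this abstract describes (dispatch a workload to local accelerators, then decide whether to hand transformed intermediate state to accelerators at a different edge location) can be pictured with a minimal Python sketch. Every name below (EdgeOrchestrator, Accelerator, transform_for_migration, and so on) is invented for illustration and is not taken from the patent claims.

```python
# Illustrative sketch only: class and method names are hypothetical,
# not the interfaces defined in the patent.
from dataclasses import dataclass, field


@dataclass
class Accelerator:
    accel_id: str
    kind: str                                   # e.g. "fpga", "gpu"
    state: dict = field(default_factory=dict)   # intermediate workload data

    def process(self, workload):
        # Placeholder for accelerator-specific processing.
        self.state[workload["id"]] = f"partial-result-of-{workload['id']}"

    def transform_for_migration(self, workload_id, target_kind):
        # Re-encode intermediate state so a different accelerator type
        # at another edge location can resume the workload.
        return {"workload_id": workload_id,
                "payload": self.state.pop(workload_id),
                "target_kind": target_kind}


class EdgeOrchestrator:
    """Device hosted on the present edge location."""

    def __init__(self, accelerators):
        self.accelerators = accelerators

    def handle_workload(self, workload):
        # 1. Determine which local accelerators should run the workload.
        chosen = [a for a in self.accelerators if a.kind == workload["needs"]]
        # 2. Transmit the workload to them for processing.
        for accel in chosen:
            accel.process(workload)
        return chosen

    def maybe_migrate(self, workload, chosen, requester_moving, target_kind):
        # 3. Decide whether migration is warranted, e.g. because the
        #    requesting device is moving toward a different edge location.
        if not requester_moving:
            return None
        # 4. Ask the local accelerators for transformed workload data that
        #    the remote accelerators can consume.
        return [a.transform_for_migration(workload["id"], target_kind)
                for a in chosen]


# Example: dispatch a workload, then migrate it to a GPU-based edge location.
edge = EdgeOrchestrator([Accelerator("a0", "fpga")])
wl = {"id": "wl-42", "needs": "fpga"}
used = edge.handle_workload(wl)
print(edge.maybe_migrate(wl, used, requester_moving=True, target_kind="gpu"))
```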
  • Publication number: 20230022620
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Application
    Filed: July 28, 2022
    Publication date: January 26, 2023
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
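
One of the techniques named in the abstract above, deadline-driven resource allocation, can be sketched briefly. The Request and EdgeNode classes, the earliest-deadline-first policy, and the core counts below are assumptions made for illustration, not the mechanism claimed in the filing.

```python
# Hedged sketch of deadline-driven allocation at a single edge node.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Request:
    deadline: float                 # earliest-deadline-first ordering key
    name: str = field(compare=False)
    cores: int = field(compare=False)


class EdgeNode:
    def __init__(self, total_cores):
        self.free_cores = total_cores
        self.pending = []           # min-heap keyed by deadline

    def submit(self, req):
        heapq.heappush(self.pending, req)

    def allocate(self, now):
        """Grant resources to the most urgent requests that still fit."""
        granted, deferred = [], []
        while self.pending:
            req = heapq.heappop(self.pending)
            if req.deadline <= now:
                continue            # deadline already missed; drop or escalate
            if req.cores <= self.free_cores:
                self.free_cores -= req.cores
                granted.append(req.name)
            else:
                deferred.append(req)
        for req in deferred:        # keep unserved requests for the next cycle
            heapq.heappush(self.pending, req)
        return granted


node = EdgeNode(total_cores=8)
node.submit(Request(deadline=5.0, name="video-analytics", cores=6))
node.submit(Request(deadline=2.0, name="safety-alert", cores=4))
print(node.allocate(now=0.0))       # the tighter-deadline request is served first
```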
  • Publication number: 20230006889
    Abstract: The present disclosure is generally related to edge computing technologies (ECTs), communications networking, and network slicing, and in particular to techniques and technologies for providing flow-specific network slices. Specifically, the present disclosure describes mechanisms that expand existing end-to-end architectures to include quality of service and monitoring mechanisms that connect network slicing technologies with infrastructure and/or network data center quality of service provider domains. The described mechanisms provide data center bridging to connect network, edge computing, and cloud computing domains.
    Type: Application
    Filed: August 31, 2022
    Publication date: January 5, 2023
    Inventors: Akhilesh S. Thyagaturu, Francesc Guim Bernat, Xiangyang Zhuang, Karthik Kumar, Petar Torre
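
To make the idea of tying a flow-specific slice to data-center QoS more concrete, the sketch below maps a per-flow latency target onto an Ethernet-style priority class. The slice identifiers, latency thresholds, and the slice_to_pcp policy are illustrative assumptions only; the publication does not specify this mapping.

```python
# Hedged sketch: bridging a flow-specific network slice into a
# data-center / edge fabric QoS class.
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowSlice:
    slice_id: str        # slice identifier (assumed format)
    flow_id: str         # identifies one traffic flow inside the slice
    latency_ms: float    # end-to-end latency target for this flow


def slice_to_pcp(flow: FlowSlice) -> int:
    # Hypothetical policy: tighter latency targets map to higher
    # priority code points (PCP) inside the bridged fabric.
    if flow.latency_ms <= 5:
        return 7          # highest-priority traffic class
    if flow.latency_ms <= 20:
        return 5
    return 1              # best-effort


def program_fabric(flows):
    """Return the per-flow QoS configuration a bridging layer could apply."""
    return {f.flow_id: {"slice": f.slice_id, "pcp": slice_to_pcp(f)}
            for f in flows}


flows = [FlowSlice("slice-urllc", "flow-1", latency_ms=2.0),
         FlowSlice("slice-embb", "flow-2", latency_ms=50.0)]
print(program_fabric(flows))
```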
  • Patent number: 11412052
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
  • Publication number: 20220237033
    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
    Type: Application
    Filed: February 7, 2022
    Publication date: July 28, 2022
    Inventors: Evan Custodio, Francesc Guim Bernat, Suraj Prabhakaran, Trevor Cooper, Ned M. Smith, Kshitij Doshi, Petar Torre
  • Publication number: 20220166847
    Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more items of input data needed to fulfill the request by the service and to request that input data from an edge resource identified to provide it. The edge gateway device is to provide the input data to the entity associated with the request.
    Type: Application
    Filed: December 3, 2021
    Publication date: May 26, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Petar Torre, Ned Smith, Brinda Ganesh, Evan Custodio, Suraj Prabhakaran
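
The request-fulfillment flow described in the abstract above (receive a request, identify the input data it needs, fetch that data from edge resources, hand it to the service) can be sketched in a few lines. The EdgeGateway class, the registry of retrieval callables, and the toy detection service are hypothetical stand-ins, not the interfaces defined in the patent.

```python
# Hedged sketch of an edge gateway fulfilling a service request.
class EdgeGateway:
    def __init__(self, resource_registry, services):
        # resource_registry: maps an input-data name to a callable that
        # retrieves it from the edge resource holding it.
        self.resource_registry = resource_registry
        self.services = services   # maps service name -> required inputs + fn

    def handle_request(self, service_name, request_args):
        service = self.services[service_name]
        # Identify the input data needed to fulfill the request.
        needed = service["inputs"]
        # Request each input from the edge resource identified to provide it.
        inputs = {name: self.resource_registry[name]() for name in needed}
        # Provide the input data to the entity hosting the service.
        return service["fn"](request_args, inputs)


# Toy example: a detection service that needs a camera frame and model weights.
registry = {
    "camera_frame": lambda: b"\x00\x01frame-bytes",
    "model_weights": lambda: "tiny-detector-v1",
}
services = {
    "detect": {
        "inputs": ["camera_frame", "model_weights"],
        "fn": lambda args, data: {"objects": [], "model": data["model_weights"],
                                  "roi": args.get("roi")},
    }
}
gateway = EdgeGateway(registry, services)
print(gateway.handle_request("detect", {"roi": "full"}))
```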
  • Patent number: 11243817
    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: February 8, 2022
    Assignee: Intel Corporation
    Inventors: Evan Custodio, Francesc Guim Bernat, Suraj Prabhakaran, Trevor Cooper, Ned M. Smith, Kshitij Doshi, Petar Torre
  • Patent number: 11196837
    Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more items of input data needed to fulfill the request by the service and to request that input data from an edge resource identified to provide it. The edge gateway device is to provide the input data to the entity associated with the request.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: December 7, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Petar Torre, Ned Smith, Brinda Ganesh, Evan Custodio, Suraj Prabhakaran
  • Publication number: 20190230191
    Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more items of input data needed to fulfill the request by the service and to request that input data from an edge resource identified to provide it. The edge gateway device is to provide the input data to the entity associated with the request.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Petar Torre, Ned Smith
  • Publication number: 20190227843
    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Evan Custodio, Francesc Guim Bernat, Suraj Prabhakaran, Trevor Cooper, Ned M. Smith, Kshitij Doshi, Petar Torre
  • Publication number: 20190158606
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 23, 2019
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul