Patents by Inventor Marcin Spoczynski

Marcin Spoczynski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11917038
    Abstract: An apparatus is disclosed to compress packets, the apparatus comprising: a data analyzer to identify a new destination address and a protocol identifier of an input packet corresponding to a new destination node and a communication system between the new destination node and a source node; a compression engine to utilize a plurality of compression functions based on the new destination address and the protocol identifier and reduce a size of the input packet; a compression analyzer to identify a reduced packet and a compression function identifier corresponding to the reduced packet, the compression function identifier associated with one of the compression functions; and a source modifier to construct a packet to include the compression function identifier by modifying unregistered values of a protocol identifier by a difference associated with the compression function identifier, the packet to inform the new destination node of a compression function.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: February 27, 2024
    Assignee: Intel Corporation
    Inventors: Michael Nolan, Keith Ellis, Marcin Spoczynski, Michael McGrath, David Coates
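
As a rough illustration of the compression scheme summarized in patent 11917038, the Python sketch below tries several standard codecs and signals the winning one by offsetting an assumed unregistered protocol-identifier value; the codec table, the 0xF0 base value, and the function names are illustrative, not taken from the filing.

```python
# Illustrative sketch (not the patented implementation): pick among several
# compression functions and signal the chosen codec to the destination by
# offsetting an assumed unregistered protocol-identifier value.
import bz2, lzma, zlib

# Hypothetical codec table; the compression-function identifier is the index.
CODECS = {0: zlib.compress, 1: bz2.compress, 2: lzma.compress}
UNREGISTERED_PROTO_BASE = 0xF0  # assumed unregistered protocol-ID range

def compress_packet(payload: bytes, dest_addr: str, proto_id: int) -> tuple[int, bytes]:
    """Return (modified_proto_id, reduced_payload) for the smallest result."""
    candidates = {cid: fn(payload) for cid, fn in CODECS.items()}
    best_id, reduced = min(candidates.items(), key=lambda kv: len(kv[1]))
    # Encode the chosen compression function in the protocol identifier so the
    # destination node can select the matching decompressor.
    modified_proto_id = UNREGISTERED_PROTO_BASE + best_id
    return modified_proto_id, reduced

if __name__ == "__main__":
    proto, body = compress_packet(b"sensor-reading " * 100, "10.0.0.7", 17)
    print(hex(proto), len(body))
```
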
  • Patent number: 11824784
    Abstract: Various approaches for implementing platform resource management are described. In an edge computing system deployment, an edge computing device includes processing circuitry coupled to a memory. The processing circuitry is configured to obtain, from an orchestration provider, an SLO (or SLA) that defines usage of an accessible feature of the edge computing device by a container executing on a virtual machine within the edge computing system. A computation model is retrieved based on at least one key performance indicator (KPI) specified in the SLO. The defined usage of the accessible feature is mapped to a plurality of feature controls using the retrieved computation model. The plurality of feature controls is associated with platform resources of the edge computing device that are pre-allocated to the container. The usage of the platform resources allocated to the container is monitored using the plurality of feature controls.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: November 21, 2023
    Assignee: Intel Corporation
    Inventors: Brian Andrew Keating, Marcin Spoczynski, Lokpraveen Mosur, Kshitij Arun Doshi, Francesc Guim Bernat
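
The SLO-to-feature-control mapping described in patent 11824784 can be pictured with a sketch like the following; the KPI keys, the control knobs (CPU quota, cache ways, memory bandwidth), and the linear mapping rules are assumptions for illustration, not Intel's computation model.

```python
# Hypothetical sketch: map an SLO's KPIs to platform feature controls for a
# container, then monitor measured KPIs against the SLO.
from dataclasses import dataclass

@dataclass
class FeatureControls:
    cpu_quota_pct: int      # share of pre-allocated CPU time
    llc_ways: int           # last-level-cache ways reserved for the container
    mem_bw_pct: int         # memory-bandwidth allocation

# A toy "computation model": simple rules from KPI targets to control settings.
def map_slo_to_controls(slo: dict) -> FeatureControls:
    p99_latency_ms = slo["p99_latency_ms"]
    throughput_rps = slo["throughput_rps"]
    return FeatureControls(
        cpu_quota_pct=min(100, throughput_rps // 50),
        llc_ways=4 if p99_latency_ms < 10 else 2,
        mem_bw_pct=80 if p99_latency_ms < 10 else 40,
    )

def violates_slo(measured: dict, slo: dict) -> bool:
    # Monitoring step: compare a measured KPI against the SLO definition.
    return measured["p99_latency_ms"] > slo["p99_latency_ms"]

slo = {"p99_latency_ms": 8, "throughput_rps": 2000}
print(map_slo_to_controls(slo), violates_slo({"p99_latency_ms": 11}, slo))
```
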
  • Publication number: 20230362279
    Abstract: System and techniques for orchestrating a service with an interest packet in information centric networking are described herein. The interest packet includes a compound name that includes multiple ICN name components, and the interest packet includes a field with a list of ICN components. A device, after receiving the interest packet, locates an ICN name component from the multiple ICN name components that is represented in the list of ICN components. The device may then select an interface from multiple interfaces available to the device, the selection based on the ICN name component. The device may then transmit the interest packet via the interface.
    Type: Application
    Filed: June 30, 2023
    Publication date: November 9, 2023
    Inventors: Roman Kovalchukov, Nageen Himayat, Srikathyayani Srikanteswara, Yevgeni Koucheryavy, Marcin Spoczynski, Yi Zhang, Dmitri Moltchanov, Gabriel Arrobo, Hao Feng
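
A minimal sketch of the interface-selection step described in publication 20230362279, assuming a simple FIB keyed by ICN name components; the field and table names are hypothetical.

```python
# Sketch of interest forwarding keyed on a compound ICN name: find the first
# name component that also appears in the packet's component list and use it
# to pick an outgoing interface from the forwarding table.
def select_interface(compound_name: list[str], listed_components: set[str],
                     fib: dict[str, str]) -> str | None:
    for component in compound_name:
        if component in listed_components:
            # Forward on the interface the FIB associates with this component.
            return fib.get(component)
    return None

fib = {"/transcode": "eth1", "/store": "eth2"}
iface = select_interface(["/video", "/transcode", "/1080p"],
                         {"/transcode", "/store"}, fib)
print(iface)  # eth1
```
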
  • Publication number: 20230195538
    Abstract: A device may include a processor that includes a named data network (NDN) forwarding daemon (NFD) module. The NFD module may be configured to identify an accelerator on which to configure an offloaded NFD module based on a configuration table indicating a current configuration of the accelerator. The NFD module may be configured to configure the offloaded NFD module on the identified accelerator. The NFD module may be configured to receive an interest packet that includes a workload request. The interest packet may be configured according to an NDN protocol. The NFD module may be configured to determine that the offloaded NFD module is configured to perform the workload request using the identified accelerator based on the configuration table. The NFD module may be configured to offload, via an application programming interface, the workload request to the offloaded NFD module to perform the workload request using the identified accelerator.
    Type: Application
    Filed: November 17, 2022
    Publication date: June 22, 2023
    Inventors: Marcin Spoczynski, Hao Feng, Maruti Gupta Hyde, Nageen Himayat, Srikathyayani Srikanteswara, Yi Zhang
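
The offload decision in publication 20230195538 might be sketched as follows, with a toy configuration table and made-up accelerator and kernel names; the real design routes the offload through an NDN forwarding daemon and an offload API, which this sketch only gestures at.

```python
# Toy sketch: an NDN forwarder checks a configuration table to find an
# accelerator whose offloaded module can serve the workload in an interest.
CONFIG_TABLE = {
    "fpga0": {"configured_kernels": {"matmul", "fft"}},
    "gpu0":  {"configured_kernels": {"resnet50"}},
}

def pick_accelerator(workload: str) -> str | None:
    # Identify an accelerator whose current configuration can run the workload.
    for accel, cfg in CONFIG_TABLE.items():
        if workload in cfg["configured_kernels"]:
            return accel
    return None

def handle_interest(interest: dict) -> str:
    workload = interest["workload"]
    accel = pick_accelerator(workload)
    if accel is None:
        return "NACK: no configured accelerator"
    # In the described design this step would go through an offload API.
    return f"offloaded {workload} to {accel}"

print(handle_interest({"name": "/compute/fft", "workload": "fft"}))
```
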
  • Publication number: 20230195531
    Abstract: A task modeling system, including a plurality of processing clients having a plurality of processing cores; a task modeler, including a memory storing an artificial neural network; and a processor, configured to receive input data representing a plurality of processing tasks to be completed by the processing client within a predefined time duration; and implement its artificial neural network to determine from the input data an assignment of the processing tasks among the processing cores for completion of the processing tasks within the predefined time duration, and determine a power management factor for each of the plurality of processing cores for power management during the predefined time duration; wherein the artificial neural network is configured to select the power management factor for each of the plurality of processing cores to achieve a power usage within a predefined threshold for the plurality of processing cores during the predefined time duration.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 22, 2023
    Inventors: Maruti Gupta Hyde, Nageen Himayat, Ravikumar Balakrishnan, Mustafa Akdeniz, Marcin Spoczynski, Arjun Anand, Marius Arvinte
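
A sketch of the task-modeler interface from publication 20230195531: given task descriptions, it returns a core assignment plus per-core power-management factors. The greedy heuristic below merely stands in for the trained artificial neural network the filing describes; all parameter names are assumed.

```python
# Stand-in for the task modeler: assign tasks to cores for a deadline and
# scale per-core power factors so the total stays within a power budget.
def plan(tasks: list[dict], num_cores: int, deadline_ms: float,
         power_budget: float) -> tuple[dict, list[float]]:
    # Greedy longest-processing-time assignment as a placeholder for the ANN.
    loads = [0.0] * num_cores
    assignment = {}
    for task in sorted(tasks, key=lambda t: t["cost_ms"], reverse=True):
        core = loads.index(min(loads))
        assignment[task["id"]] = core
        loads[core] += task["cost_ms"]
    # Busier cores get a larger share of the budget; the sum stays bounded.
    total = sum(loads) or 1.0
    power_factors = [power_budget * load / total for load in loads]
    return assignment, power_factors

tasks = [{"id": "t1", "cost_ms": 30}, {"id": "t2", "cost_ms": 10},
         {"id": "t3", "cost_ms": 25}]
print(plan(tasks, num_cores=2, deadline_ms=60, power_budget=1.0))
```
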
  • Publication number: 20230195536
    Abstract: A device may include a processor. The processor may sample a portion of a workload to offload to compute resources and a portion of a current workload of the compute resources. The processor may simulate offloading of the workload to the compute resources using the sampled portion of the workload, the sampled portion of the current workload, and telemetry data corresponding to the compute resources. The compute resources may be configured to perform the workload according to a plurality of offloading configurations. The processor may determine a rank score for each offloading configuration of the plurality of offloading configurations based on the simulations. Responsive to a rank score corresponding to an offloading configuration of the plurality of offloading configurations exceeding a threshold value, the processor may offload the workload to the compute resource corresponding to the offloading configuration that corresponds to the rank score that exceeds the threshold value.
    Type: Application
    Filed: December 21, 2021
    Publication date: June 22, 2023
    Inventors: Marcin Spoczynski, Nageen Himayat, Srikathyayani Srikanteswara, Hao Feng, Yi Zhang, S M Iftekharul Alam, Rath Vannithamby
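
The simulate-rank-offload loop of publication 20230195536 could look roughly like this; the sampling size, the simulation model, the rank formula, and the threshold are placeholder assumptions.

```python
# Sketch: simulate each candidate offloading configuration on sampled slices
# of the workload and offload to the first configuration whose rank score
# exceeds a threshold.
import random

def simulate(config: dict, workload_sample: list[float],
             current_sample: list[float], telemetry: dict) -> float:
    # Stand-in simulation: estimated completion time given spare capacity.
    spare = telemetry[config["resource"]]["spare_capacity"]
    demand = sum(workload_sample) + sum(current_sample)
    return demand / max(spare, 1e-9)

def rank_score(config, workload_sample, current_sample, telemetry) -> float:
    est_time = simulate(config, workload_sample, current_sample, telemetry)
    return 1.0 / est_time  # faster estimated completion -> higher rank

def choose_offload(configs, workload, current, telemetry, threshold=0.5):
    workload_sample = random.sample(workload, k=min(8, len(workload)))
    current_sample = random.sample(current, k=min(8, len(current)))
    for cfg in configs:
        if rank_score(cfg, workload_sample, current_sample, telemetry) > threshold:
            return cfg["resource"]
    return None

telemetry = {"edge-a": {"spare_capacity": 40.0}, "edge-b": {"spare_capacity": 5.0}}
configs = [{"resource": "edge-a"}, {"resource": "edge-b"}]
print(choose_offload(configs, [1.0] * 20, [0.5] * 10, telemetry))
```
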
  • Patent number: 11637687
    Abstract: Methods, apparatus, systems and articles of manufacture to determine provenance for data supply chains are disclosed. Example instructions cause a machine to at least, in response to data being generated, generate a local data object and object metadata corresponding to the data; hash the local data object; generate a hash of a label of the local data object; generate a hierarchical data structure for the data including the hash of the local data object and the hash of the label of the local data object; generate a data supply chain object including the hierarchical data structure; and transmit the data and the data supply chain object to a device that requested access to the data.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 25, 2023
    Assignee: Intel Corporation
    Inventors: Ned Smith, Francesc Guim Bernat, Sanjay Bakshi, Paul O'Neill, Ben McCahill, Brian A. Keating, Adrian Hoban, Kapil Sood, Mona Vij, Nilesh Jain, Rajesh Poornachandran, Trevor Cooper, Kshitij A. Doshi, Marcin Spoczynski
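
A compact sketch of building a data-supply-chain object as described in patent 11637687: hash the local data object and its label, then wrap both in a hierarchical structure that accompanies the data. Field names and the use of SHA-256 are illustrative choices, not mandated by the patent.

```python
# Rough sketch of provenance generation for a data supply chain.
import hashlib, json, time

def make_supply_chain_object(data: bytes, label: str, producer: str) -> dict:
    data_hash = hashlib.sha256(data).hexdigest()
    label_hash = hashlib.sha256(label.encode()).hexdigest()
    hierarchy = {
        "object_hash": data_hash,
        "label_hash": label_hash,
        "metadata": {"producer": producer, "created": time.time()},
        "children": [],  # downstream transformations append their own nodes
    }
    return {"provenance": hierarchy}

dsc = make_supply_chain_object(b"temperature=21.5", "sensor-42/reading", "gateway-3")
print(json.dumps(dsc, indent=2))
```
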
  • Patent number: 11609784
    Abstract: A method for distributing at least one computational process amongst shared resources is proposed. At least two shared resources capable of performing the computational process are determined. According to the method, a workload characteristic for each of the shared resources is predicted. The workload characteristic accounts for at least two subsystems of each shared resource. One of the at least two shared resources is selected based on the predicted workload characteristics.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: March 21, 2023
    Assignee: Intel Corporation
    Inventors: Thijs Metsch, Leonard Feehan, Annie Ibrahim Rana, Rahul Khanna, Sharon Ruane, Marcin Spoczynski
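
The prediction-then-selection flow of patent 11609784 might be sketched as below, with a naive moving-average predictor spanning two subsystems (CPU and memory) and assumed weights; the patent does not prescribe this particular model.

```python
# Sketch: predict a workload characteristic per shared resource across at
# least two subsystems, then select the least-loaded resource.
def predict(history: list[dict]) -> dict:
    n = len(history)
    return {
        "cpu": sum(h["cpu"] for h in history) / n,
        "mem": sum(h["mem"] for h in history) / n,
    }

def select_resource(histories: dict[str, list[dict]]) -> str:
    # Combine subsystem predictions into one score and pick the lowest.
    scores = {name: 0.6 * p["cpu"] + 0.4 * p["mem"]
              for name, p in ((n, predict(h)) for n, h in histories.items())}
    return min(scores, key=scores.get)

histories = {
    "node-a": [{"cpu": 0.7, "mem": 0.5}, {"cpu": 0.8, "mem": 0.6}],
    "node-b": [{"cpu": 0.3, "mem": 0.4}, {"cpu": 0.4, "mem": 0.5}],
}
print(select_resource(histories))  # node-b
```
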
  • Publication number: 20220224762
    Abstract: In one embodiment, a node of a data centric network (DCN) may receive a first service request interest packet from another node of the DCN, the first service request interest packet indicating a set of functions to be performed on source data to implement a service. The node may determine that it can perform a particular function of the set of functions, and determine, based on backlog information corresponding to the particular function, whether to commit to performing the particular function or to forward the service request interest packet to another node. The node may make the determination further based on service delivery information indicating, for each face of the node, a service delivery distance for implementing the set of functions.
    Type: Application
    Filed: April 2, 2022
    Publication date: July 14, 2022
    Applicant: Intel Corporation
    Inventors: Hao Feng, Yi Zhang, Srikathyayani Srikanteswara, Marcin Spoczynski, Nageen Himayat, Alexander Bachmutsky, Maruti Gupta Hyde
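
A sketch of the commit-or-forward decision from publication 20220224762, assuming simple backlog and service-delivery-distance thresholds; the field names and limits are hypothetical.

```python
# Sketch: a node commits to a function it can run when its backlog is low and
# its best per-face service delivery distance is competitive; otherwise it
# forwards the service request interest.
def handle_service_request(node: dict, interest: dict) -> str:
    functions = interest["functions"]           # ordered chain of functions
    runnable = [f for f in functions if f in node["capabilities"]]
    if not runnable:
        return "forward"
    f = runnable[0]
    backlog = node["backlog"].get(f, 0)
    # Best per-face service delivery distance for completing the chain.
    best_distance = min(node["delivery_distance"].values())
    if backlog < node["backlog_limit"] and best_distance <= interest["max_distance"]:
        return f"commit:{f}"
    return "forward"

node = {"capabilities": {"decode", "detect"}, "backlog": {"detect": 2},
        "backlog_limit": 5, "delivery_distance": {"face0": 3, "face1": 6}}
print(handle_service_request(node, {"functions": ["detect", "annotate"],
                                    "max_distance": 4}))
```
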
  • Publication number: 20220014450
    Abstract: System and techniques for storage node recruitment in an information centric network (ICN) are described herein. An ICN node receives a storage interest packet that includes an indication differentiating the storage interest from other ICN interests. The ICN node forwards the storage interest packet and receives a storage data packet in response. Here, the storage data packet includes an indication that the storage data packet is not to be cached along with node information for a node that created the storage data packet. The ICN node may then transmit the storage data packet in accordance with a pending interest table (PIT) entry corresponding to the storage interest packet.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Srikathyayani Srikanteswara, Marcin Spoczynski, Yi Zhang, Hao Feng, Kshitij Arun Doshi, Francesc Guim Bernat, Ned M. Smith, Satish Chandra Jha, S M Iftekharul Alam, Vesh Raj Sharma Banjade, Alexander Bachmutsky, Nageen Himayat
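
The storage-recruitment forwarding behavior of publication 20220014450 could be sketched as follows; the packet flags, table layouts, and face names are assumptions used only to illustrate the no-cache data path guided by the pending interest table.

```python
# Sketch: storage interests are flagged, and the answering storage data packet
# carries a no-cache flag plus the storing node's information; it bypasses the
# content store but still follows the PIT back to the requester.
pit: dict[str, str] = {}           # pending interest table: name -> ingress face
content_store: dict[str, dict] = {}

def forward(pkt: dict, face: str) -> None:
    print(f"-> {face}: {pkt['name']}")

def on_interest(pkt: dict, ingress: str, egress: str) -> None:
    if pkt.get("storage_interest"):
        pit[pkt["name"]] = ingress    # remember where to send the reply
        forward(pkt, egress)

def on_data(pkt: dict) -> None:
    if not pkt.get("no_cache"):
        content_store[pkt["name"]] = pkt       # normal ICN caching
    # Storage data packets skip the cache but still follow the PIT entry.
    forward(pkt, pit.pop(pkt["name"], "drop"))

on_interest({"name": "/store/blob7", "storage_interest": True}, "face2", "face5")
on_data({"name": "/store/blob7", "no_cache": True, "node_info": "node-17"})
```
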
  • Publication number: 20220014462
    Abstract: System and techniques for geographic routing are described herein. A node receives a data packet that includes map data, a sequence of geographic areas to a requestor, and a target geographic area. The node may then determine that it is within the target geographic area and start a transmit timer based on a next-hop geographic area. Here, the next-hop geographic area is determined from the sequence of geographic areas in the data packet. The node may then count how many other nodes from the geographic area sent data packets while the transmit timer is running. When the transmit timer expires, the node may transmit a modified data packet if the number of data packets is less than a predefined threshold. Here, the modified data packet is the data packet updated to include local map data and the next-hop geographic area.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Hao Feng, Srikathyayani Srikanteswara, Yi Zhang, Satish Chandra Jha, S M Iftekharul Alam, Marcin Spoczynski, Alexander Bachmutsky, Nageen Himayat, Ned M. Smith, Kshitij Arun Doshi, Francesc Guim Bernat
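
A rough sketch of the suppression logic in publication 20220014462: a node in the target area starts a timer derived from the next-hop area, counts overheard forwards, and only retransmits if too few neighbors did. The timer scaling and threshold below are invented for illustration.

```python
# Sketch of timer-based forwarding suppression for geographic routing.
import threading

def schedule_forward(packet: dict, my_area: str, overheard: list[str],
                     threshold: int = 3) -> None:
    if my_area != packet["target_area"]:
        return
    hops = packet["area_sequence"]
    next_hop = hops[hops.index(my_area) + 1] if my_area in hops[:-1] else None
    delay_s = 0.01 * (1 + hash(next_hop) % 5)   # toy next-hop-based timer

    def on_expire() -> None:
        if len(overheard) < threshold:
            # Update the packet with local map data and the next-hop area.
            packet["map"] = packet.get("map", []) + ["local-map-tile"]
            packet["next_hop_area"] = next_hop
            print("forwarding toward", packet["next_hop_area"])
        else:
            print("suppressed: enough neighbours already forwarded")

    threading.Timer(delay_s, on_expire).start()

schedule_forward({"target_area": "B", "area_sequence": ["A", "B", "C"]},
                 my_area="B", overheard=["n1"])
```
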
  • Patent number: 11218546
    Abstract: A non-transitory computer-readable storage medium, an apparatus, and a computer-implemented method to select respective physical infrastructure devices of an edge computing system to implement services requested by respective service-requesting clients. The computer-readable storage medium includes computer-readable instructions that, when executed, cause at least one processor to perform operations comprising, for each candidate physical infrastructure device, calculating a utility score corresponding to each of the services requested, wherein: the utility score corresponds to one of each of the respective service-requesting clients or each subgroup of a plurality of subgroups of the respective service-requesting clients.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: January 4, 2022
    Assignee: Intel Corporation
    Inventors: Marcin Spoczynski, Michael Nolan, Keith A. Ellis, Radhika Loomba
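
The per-device utility scoring of patent 11218546 might be pictured like this; the scoring weights, request fields, and selection rule are assumptions, and the actual method computes a utility score per service-requesting client or per client subgroup.

```python
# Sketch: score each candidate physical infrastructure device per requested
# service and place the service on the device with the best utility.
def utility(device: dict, request: dict) -> float:
    latency_fit = max(0.0, 1.0 - device["latency_ms"] / request["max_latency_ms"])
    capacity_fit = min(1.0, device["free_cpus"] / request["cpus"])
    return 0.7 * latency_fit + 0.3 * capacity_fit

def place(devices: list[dict], requests: list[dict]) -> dict[str, str]:
    placement = {}
    for req in requests:
        scores = {d["name"]: utility(d, req) for d in devices}
        placement[req["service"]] = max(scores, key=scores.get)
    return placement

devices = [{"name": "cell-edge", "latency_ms": 5, "free_cpus": 4},
           {"name": "regional-dc", "latency_ms": 25, "free_cpus": 64}]
requests = [{"service": "ar-overlay", "max_latency_ms": 20, "cpus": 2},
            {"service": "batch-analytics", "max_latency_ms": 500, "cpus": 16}]
print(place(devices, requests))
```
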
  • Publication number: 20210377366
    Abstract: An apparatus is disclosed to compress packets, the apparatus comprising: a data analyzer to identify a new destination address and a protocol identifier of an input packet corresponding to a new destination node and a communication system between the new destination node and a source node; a compression engine to utilize a plurality of compression functions based on the new destination address and the protocol identifier and reduce a size of the input packet; a compression analyzer to identify a reduced packet and a compression function identifier corresponding to the reduced packet, the compression function identifier associated with one of the compression functions; and a source modifier to construct a packet to include the compression function identifier by modifying unregistered values of a protocol identifier by a difference associated with the compression function identifier, the packet to inform the new destination node of a compression function.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 2, 2021
    Inventors: Michael Nolan, Keith Ellis, Marcin Spoczynski, Michael McGrath, David Coates
  • Patent number: 11159609
    Abstract: A non-transitory computer-readable storage medium, an apparatus, and a computer-implemented method. The computer-readable storage medium is of an edge computing system and is to identify a target edge node for deployment of a workload thereon. The computer-readable storage medium further comprises computer-readable instructions that, when executed, cause at least one processor to perform operations comprising: determining whether respective ones of candidate target edge nodes of a set of candidate target edge nodes of the edge computing system support workload determinism key performance indicators (KPIs) of the workload; in response to a determination that one or more candidate target edge nodes support the workload determinism KPIs, selecting a target edge node from the one or more candidate edge nodes; and causing the workload to be deployed at the target edge node.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Michael J. McGrath, Daire Healy, Christopher D. Lucero, Marcin Spoczynski
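
A minimal sketch of the KPI-gated node selection in patent 11159609, assuming jitter and worst-case-latency KPIs and a utilisation tie-break; the workload-determinism KPIs in the filing are not limited to these.

```python
# Sketch: filter candidate edge nodes by workload-determinism KPIs, then pick
# a target node among those that qualify.
def supports_kpis(node: dict, kpis: dict) -> bool:
    return (node["max_jitter_us"] <= kpis["max_jitter_us"]
            and node["worst_case_latency_ms"] <= kpis["worst_case_latency_ms"])

def pick_target(nodes: list[dict], kpis: dict) -> str | None:
    capable = [n for n in nodes if supports_kpis(n, kpis)]
    if not capable:
        return None
    # Tie-break on lowest current utilisation among the capable nodes.
    return min(capable, key=lambda n: n["utilisation"])["name"]

nodes = [{"name": "edge-1", "max_jitter_us": 50, "worst_case_latency_ms": 2,
          "utilisation": 0.6},
         {"name": "edge-2", "max_jitter_us": 10, "worst_case_latency_ms": 1,
          "utilisation": 0.3}]
print(pick_target(nodes, {"max_jitter_us": 20, "worst_case_latency_ms": 1.5}))
```
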
  • Publication number: 20210328941
    Abstract: Example methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to change a time sensitive networking schedule implemented by a softswitch are disclosed. Example apparatus disclosed herein to change a time sensitive networking schedule implemented by a first softswitch on a compute node include a network node configurator to deploy a second softswitch on the compute node based on a first configuration specification associated with the first softswitch, configure the second softswitch to implement an updated time sensitive networking schedule different from the time sensitive networking schedule implemented by the first softswitch, and replace the first softswitch with the second softswitch in response to a determination that a first set of constraints is met for simulated network traffic processed by the second softswitch based on the updated time sensitive networking schedule.
    Type: Application
    Filed: April 13, 2021
    Publication date: October 21, 2021
    Inventors: Michael McGrath, Michael Nolan, Marcin Spoczynski, Dáire Healy, Stiofáin Fordham
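
The softswitch-replacement flow described in publication 20210328941 (the same abstract appears under patent 11012365 below) could be sketched as a blue/green swap gated by a simulated-traffic check; the constraint set and the traffic model here are stand-ins.

```python
# Sketch: clone the first softswitch's configuration, apply the updated TSN
# schedule to the clone, validate it against simulated traffic, and swap only
# if the constraints are met.
def deploy_updated_softswitch(current: dict, new_schedule: dict,
                              simulate_traffic) -> dict:
    candidate = {**current, "schedule": new_schedule}   # clone + new schedule
    results = simulate_traffic(candidate)
    constraints_met = (results["max_latency_us"] <= new_schedule["latency_bound_us"]
                       and results["dropped_frames"] == 0)
    return candidate if constraints_met else current    # swap only if valid

def fake_simulation(switch: dict) -> dict:
    return {"max_latency_us": 80, "dropped_frames": 0}

current = {"name": "sw0", "schedule": {"cycle_us": 500, "latency_bound_us": 100}}
active = deploy_updated_softswitch(
    current, {"cycle_us": 250, "latency_bound_us": 100}, fake_simulation)
print(active["schedule"])
```
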
  • Publication number: 20210188306
    Abstract: A method of implementing distributed AI or ML learning for autonomous vehicles is disclosed. An AI or ML model specific to a location or a type of the location is generated at a vehicle. In response to a detection that the vehicle is within a proximity to a road side unit (RSU) associated with the location or the type of the location, or within a proximity to an additional vehicle that is present or anticipated to be present at the location or the type of the location, an AI or ML model transmission to the additional vehicle or the RSU is caused. Based on the AI or ML model reception, deployment of an additional AI or ML model in the vehicle is caused to optimize the vehicle for the location or the type of the location.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 24, 2021
    Inventors: Nageen Himayat, Maruti Gupta Hyde, Ravikumar Balakrishnan, Mustafa Akdeniz, Marcin Spoczynski
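
The proximity-triggered model hand-off of publication 20210188306 might be sketched as follows; the range, peer records, and per-location model store are hypothetical.

```python
# Sketch: a vehicle keeps per-location model versions and shares its model
# when it comes within range of an RSU for that location or of another
# vehicle expected at that location.
def maybe_share_model(vehicle: dict, peers: list[dict], range_m: float = 150.0):
    shared = []
    for peer in peers:
        close = peer["distance_m"] <= range_m
        relevant = (peer["kind"] == "rsu" and peer["location"] == vehicle["location"]) \
            or (peer["kind"] == "vehicle"
                and vehicle["location"] in peer["expected_locations"])
        if close and relevant:
            shared.append((peer["id"], vehicle["models"][vehicle["location"]]))
    return shared

vehicle = {"location": "roundabout-7", "models": {"roundabout-7": "weights-v12"}}
peers = [{"id": "rsu-7", "kind": "rsu", "location": "roundabout-7", "distance_m": 60},
         {"id": "car-88", "kind": "vehicle", "distance_m": 40,
          "expected_locations": {"roundabout-7", "exit-3"}}]
print(maybe_share_model(vehicle, peers))
```
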
  • Patent number: 11038990
    Abstract: An apparatus is disclosed to compress packets, the apparatus comprising: a data analyzer to identify a new destination address and a protocol identifier of an input packet corresponding to a new destination node and a communication system between the new destination node and a source node; a compression engine to utilize a plurality of compression functions based on the new destination address and the protocol identifier and reduce a size of the input packet; a compression analyzer to identify a reduced packet and a compression function identifier corresponding to the reduced packet, the compression function identifier associated with one of the compression functions; and a source modifier to construct a packet to include the compression function identifier by modifying unregistered values of a protocol identifier by a difference associated with the compression function identifier, the packet to inform the new destination node of a compression function.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: June 15, 2021
    Assignee: Intel Corporation
    Inventors: Michael Nolan, Keith Ellis, Marcin Spoczynski, Michael McGrath, David Coates
  • Patent number: 11012365
    Abstract: Example methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to change a time sensitive networking schedule implemented by a softswitch are disclosed. Example apparatus disclosed herein to change a time sensitive networking schedule implemented by a first softswitch on a compute node include a network node configurator to deploy a second softswitch on the compute node based on a first configuration specification associated with the first softswitch, configure the second softswitch to implement an updated time sensitive networking schedule different from the time sensitive networking schedule implemented by the first softswitch, and replace the first softswitch with the second softswitch in response to a determination that a first set of constraints is met for simulated network traffic processed by the second softswitch based on the updated time sensitive networking schedule.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: May 18, 2021
    Assignee: Intel Corporation
    Inventors: Michael McGrath, Michael Nolan, Marcin Spoczynski, Dáire Healy, Stiofáin Fordham
  • Publication number: 20210014133
    Abstract: Methods and apparatus to coordinate edge platforms are disclosed. A disclosed example apparatus to control processing of data associated with edges includes an orchestrator analyzer to determine a first performance requirement of a first microservice of an application and a second performance requirement of a second microservice of the application. The apparatus also includes an orchestrator controller to assign the first microservice and the second microservice across first and second edge nodes between a source network and a destination network by: assigning the first microservice to the first edge node based on a first capability of the first edge node satisfying the first performance requirement of the first microservice, and assigning the second microservice to the second edge node based on a second capability of the second edge node satisfying the second performance requirement of the second microservice.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Christian Maciocco, Kshitij Doshi, Francesc Guim Bernat, Ned M. Smith, Marcin Spoczynski, Timothy Verrall, Rajesh Gadiyar, Trevor Cooper, Valerie Parker
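
A sketch of the per-microservice placement described in publication 20210014133: each microservice is assigned to the first edge node on the source-to-destination path whose capability satisfies its performance requirement. The single-number capability metric is a simplification for illustration.

```python
# Sketch: assign each microservice of an application to an edge node whose
# remaining capability satisfies the microservice's performance requirement.
def assign_microservices(app: dict, edge_nodes: list[dict]) -> dict[str, str]:
    assignment = {}
    for ms in app["microservices"]:
        for node in edge_nodes:          # nodes ordered source -> destination
            if node["capability"] >= ms["requirement"]:
                assignment[ms["name"]] = node["name"]
                node["capability"] -= ms["requirement"]   # reserve capacity
                break
        else:
            assignment[ms["name"]] = "unplaced"
    return assignment

app = {"microservices": [{"name": "ingest", "requirement": 3},
                         {"name": "analytics", "requirement": 7}]}
edges = [{"name": "edge-near", "capability": 5}, {"name": "edge-far", "capability": 10}]
print(assign_microservices(app, edges))  # ingest -> edge-near, analytics -> edge-far
```
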
  • Publication number: 20200296155
    Abstract: A non-transitory computer-readable storage medium, an apparatus, and a computer-implemented method. The computer-readable storage medium is of an edge computing system and is to identify a target edge node for deployment of a workload thereon. The computer-readable storage medium further comprises computer-readable instructions that, when executed, cause at least one processor to perform operations comprising: determining whether respective ones of candidate target edge nodes of a set of candidate target edge nodes of the edge computing system support workload determinism key performance indicators (KPIs) of the workload; in response to a determination that one or more candidate target edge nodes support the workload determinism KPIs, selecting a target edge node from the one or more candidate edge nodes; and causing the workload to be deployed at the target edge node.
    Type: Application
    Filed: March 27, 2020
    Publication date: September 17, 2020
    Applicant: Intel Corporation
    Inventors: Michael J. McGrath, Daire Healy, Christopher D. Lucero, Marcin Spoczynski