Patents by Inventor Edwin Verplanke

Edwin Verplanke has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240129353
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve webservers using dynamic load balancers. An example method includes identifying first and second data object types associated with media and with first and second data objects of the media. The example method also includes enqueuing first and second event data associated with the first and second data objects into first and second queues in first circuitry in a die of programmable circuitry. The example method further includes dequeuing the first and second event data into third and fourth queues associated with first and second cores of the programmable circuitry, the first circuitry separate from the first core and the second core. The example method additionally includes causing the first and second cores to execute first and second computing operations based on the first and second event data in the third and fourth queues.
    Type: Application
    Filed: December 21, 2023
    Publication date: April 18, 2024
    Inventors: Amruta Misra, Niall McDonnell, Mrittika Ganguli, Edwin Verplanke, Stephen Palermo, Rahul Shah, Pushpendra Kumar, Vrinda Khirwadkar, Valerie Parker
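    Illustrative C sketch (not from the filing; the two-type/two-core split, names, and sizes are assumptions): a host-software model of the two-stage flow in the abstract above, where events are first enqueued per data-object type and then dequeued into per-core queues whose cores run the computing operations.
      /* Build: cc -O2 two_stage_queues.c */
      #include <stdio.h>

      #define QCAP 16

      struct event { int obj_type; int payload; };
      struct queue { struct event slots[QCAP]; int head, tail; };

      static int enqueue(struct queue *q, struct event e) {
          if ((q->tail + 1) % QCAP == q->head) return -1;   /* queue full */
          q->slots[q->tail] = e;
          q->tail = (q->tail + 1) % QCAP;
          return 0;
      }

      static int dequeue(struct queue *q, struct event *e) {
          if (q->head == q->tail) return -1;                /* queue empty */
          *e = q->slots[q->head];
          q->head = (q->head + 1) % QCAP;
          return 0;
      }

      int main(void) {
          static struct queue per_type[2];   /* stage 1: one queue per data-object type */
          static struct queue per_core[2];   /* stage 2: one queue per worker core      */
          struct event in[] = { {0, 10}, {1, 11}, {0, 12}, {1, 13} };
          struct event e;

          /* Enqueue event data by data-object type into the first-stage queues. */
          for (unsigned i = 0; i < sizeof in / sizeof in[0]; i++)
              enqueue(&per_type[in[i].obj_type], in[i]);

          /* Dequeue from the type queues into the queue of the core assigned to
           * that type, then let each "core" run its computing operation. */
          for (int t = 0; t < 2; t++)
              while (dequeue(&per_type[t], &e) == 0)
                  enqueue(&per_core[t], e);

          for (int c = 0; c < 2; c++)
              while (dequeue(&per_core[c], &e) == 0)
                  printf("core %d handles type %d payload %d\n", c, e.obj_type, e.payload);
          return 0;
      }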
  • Publication number: 20230325241
    Abstract: Embodiments for allocating shared resources are disclosed. In an embodiment, an apparatus includes a core and a hardware rate selector. The hardware rate selector is to, in response to a first indication that demand for memory bandwidth from the core has reached a threshold value, determine a delay value to be used to limit allocation of memory bandwidth to the core. The hardware rate selector includes a controller having a first counter to count a second indication of demand for memory bandwidth from the core and a second counter to count expirations of time windows. The first indication is based on a difference between the first counter value and the second counter value.
    Type: Application
    Filed: September 26, 2020
    Publication date: October 12, 2023
    Applicant: Intel Corporation
    Inventors: Andrew J. HERDRICH, Yen-Cheng LIU, Venkateswara MADDURI, Krishnakumar K. GANAPATHY, Edwin VERPLANKE, Christopher GIANOS, Hanna ALAM, Joseph NUZMAN, Larisa NOVAKOVSKY
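    For illustration only, a software analogue in C of the counter-difference control described above (budget, threshold, and step values are invented; the patent's rate selector is hardware in the core's memory path): one counter accumulates demand indications, another counts window expirations, and their scaled difference selects the delay applied to memory-bandwidth allocation.
      /* Build: cc -O2 bw_delay.c */
      #include <stdio.h>

      #define WINDOW_BUDGET 100   /* demand events allowed per time window (assumed) */
      #define THRESHOLD     100   /* difference at which throttling starts (assumed) */
      #define DELAY_STEP      4   /* overshoot units per extra delay unit (assumed)  */

      /* Delay value derived from the difference between the demand counter and
       * the (budget-scaled) window-expiration counter. */
      static unsigned select_delay(unsigned long demand, unsigned long windows) {
          long diff = (long)demand - (long)(windows * WINDOW_BUDGET);
          if (diff < THRESHOLD) return 0;                       /* within budget: no delay */
          return (unsigned)((diff - THRESHOLD) / DELAY_STEP) + 1;
      }

      int main(void) {
          unsigned long demand = 0, windows = 0;
          /* Simulate five windows in which the core issues 130 demand events each. */
          for (int w = 1; w <= 5; w++) {
              demand  += 130;
              windows += 1;
              printf("window %d: demand=%lu windows=%lu delay=%u\n",
                     w, demand, windows, select_delay(demand, windows));
          }
          return 0;
      }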
  • Patent number: 11695628
    Abstract: Technologies for analyzing and optimizing workloads (e.g., virtual network functions) executing on edge resources are disclosed. According to one embodiment disclosed herein, a compute device launches a virtualized system including a virtual network function and a performance manager, the performance manager to monitor a current resource usage of the virtual network function as a function of a performance profile. The compute device determines, in response to a determination that one or more quality-of-service (QoS) requirements is not satisfied, whether one or more resources from the platform are available for satisfying the QoS requirements. The compute device receives, in response to a determination that the one or more resources are available for satisfying the QoS requirements, the one or more resources and updates the performance profile as a function of the received resources.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: July 4, 2023
    Assignee: Intel Corporation
    Inventors: Rashmin Patel, Monica Kenguva, Francesc Guim Bernat, Edwin Verplanke, Andrew J. Herdrich
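    A minimal C sketch of the performance-manager loop described in the abstract (structure and function names are hypothetical): check the VNF's usage against its QoS requirement, ask the platform for spare resources when QoS is missed, and fold any grant back into the performance profile.
      /* Build: cc -O2 perf_manager.c */
      #include <stdbool.h>
      #include <stdio.h>

      struct perf_profile { unsigned cpu_shares; unsigned mem_mb; };
      struct vnf_usage    { unsigned cpu_shares; unsigned mem_mb; double latency_ms; };

      /* QoS requirement for the sketch: processing latency under 2 ms. */
      static bool qos_met(const struct vnf_usage *u) { return u->latency_ms < 2.0; }

      /* Stand-in for asking the platform for spare capacity; returns the grant. */
      static unsigned request_cpu_shares(unsigned wanted, unsigned platform_free) {
          return wanted <= platform_free ? wanted : platform_free;
      }

      int main(void) {
          struct perf_profile profile = { .cpu_shares = 4, .mem_mb = 2048 };
          struct vnf_usage    usage   = { .cpu_shares = 4, .mem_mb = 1900, .latency_ms = 3.5 };
          unsigned platform_free_shares = 6;

          if (!qos_met(&usage)) {
              unsigned granted = request_cpu_shares(2, platform_free_shares);
              if (granted > 0) {
                  profile.cpu_shares   += granted;     /* update the profile with the grant */
                  platform_free_shares -= granted;
                  printf("QoS miss: granted %u extra shares, profile now %u shares\n",
                         granted, profile.cpu_shares);
              } else {
                  printf("QoS miss: no spare platform resources available\n");
              }
          }
          return 0;
      }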
  • Patent number: 11650851
    Abstract: Methods, apparatus, systems and machine-readable storage media of an edge computing device using an edge server CPU with dynamic deterministic scaling is disclosed. A processing circuitry arrangement includes processing circuitry with processor cores operating at a center base frequency and memory. The memory includes instructions configuring the processing circuitry to configure a first set of the processor cores of the CPU to switch the operating at the center base frequency to operating at a first modified base frequency, and a second set of the processor cores to switch the operating at the center base frequency to operating at a second modified base frequency. A same processor core within the first set or the second set can be configured to switch operating between the first modified base frequency or the second modified base frequency.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: May 16, 2023
    Assignee: Intel Corporation
    Inventors: Stephen T. Palermo, Nikhil Gupta, Vasudevan Srinivasan, Christopher MacNamara, Sarita Maini, Abhishek Khade, Edwin Verplanke, Lokpraveen Mosur
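    A software model in C of the core-set split described above (core counts and frequencies are invented for illustration; on real parts the modified base frequencies come from the processor's power-management interface, not application code).
      /* Build: cc -O2 base_freq_split.c */
      #include <stdio.h>

      #define NUM_CORES       8
      #define CENTER_BASE_MHZ 2300   /* common base frequency (illustrative)          */
      #define HIGH_BASE_MHZ   2800   /* first modified base frequency (illustrative)  */
      #define LOW_BASE_MHZ    2000   /* second modified base frequency (illustrative) */

      struct core { int id; unsigned base_mhz; };

      /* Switch the first `high_count` cores to the higher modified base frequency
       * and the remainder to the lower one; re-running with a different count
       * moves a core between the two sets. */
      static void apply_split(struct core cores[], int n, int high_count) {
          for (int i = 0; i < n; i++)
              cores[i].base_mhz = (i < high_count) ? HIGH_BASE_MHZ : LOW_BASE_MHZ;
      }

      int main(void) {
          struct core cores[NUM_CORES];
          for (int i = 0; i < NUM_CORES; i++)
              cores[i] = (struct core){ .id = i, .base_mhz = CENTER_BASE_MHZ };

          apply_split(cores, NUM_CORES, 4);   /* cores 0-3 high set, cores 4-7 low set */

          for (int i = 0; i < NUM_CORES; i++)
              printf("core %d: %u MHz\n", cores[i].id, cores[i].base_mhz);
          return 0;
      }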
  • Publication number: 20230082780
    Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part on a packet receive rate of the packets associated with the heavy flow.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Inventors: Chenmin SUN, Yipeng WANG, Rahul R. SHAH, Ren WANG, Sameh GOBRIEL, Hongjun NI, Mrittika GANGULI, Edwin VERPLANKE
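    For illustration, a minimal heavy-flow detector in C (table size, threshold, and eviction policy are assumptions): the first set of processing units counts packets per flow and, once a flow crosses a receive-rate threshold, marks it heavy so its packets are steered toward the second set through the load balancer.
      /* Build: cc -O2 heavy_flow.c */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define FLOW_SLOTS      256   /* small direct-mapped flow table (assumed)         */
      #define HEAVY_THRESHOLD 100   /* packets per interval to count as heavy (assumed) */

      struct flow_stat { uint32_t flow_id; uint32_t pkts; bool heavy; };

      static struct flow_stat table[FLOW_SLOTS];

      static struct flow_stat *lookup(uint32_t flow_id) {
          struct flow_stat *f = &table[flow_id % FLOW_SLOTS];
          if (f->flow_id != flow_id) {            /* naive eviction on collision */
              f->flow_id = flow_id;
              f->pkts    = 0;
              f->heavy   = false;
          }
          return f;
      }

      /* Called per packet by the first set of processing units; returns true if
       * the packet should be queued toward the heavy-flow workers. */
      static bool classify_packet(uint32_t flow_id) {
          struct flow_stat *f = lookup(flow_id);
          if (++f->pkts >= HEAVY_THRESHOLD)
              f->heavy = true;
          return f->heavy;
      }

      int main(void) {
          for (int i = 0; i < 150; i++) classify_packet(7);  /* flow 7 is busy  */
          for (int i = 0; i < 5; i++)   classify_packet(9);  /* flow 9 is light */
          printf("flow 7 heavy: %d\n", classify_packet(7));
          printf("flow 9 heavy: %d\n", classify_packet(9));
          return 0;
      }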
  • Patent number: 11531562
    Abstract: Systems, methods, and apparatuses for resource monitoring identification reuse are described. In an embodiment, a system comprises a hardware processor core to execute instructions, storage for resource monitoring identification (RMID) recycling instructions to be executed by the hardware processor core, and a logical processor to execute on the hardware processor core, the logical processor including associated storage for an RMID and state.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: December 20, 2022
    Assignee: Intel Corporation
    Inventors: Matthew Fleming, Edwin Verplanke, Andrew Herdrich, Ravishankar Iyer
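    A simplified RMID pool with recycling, sketched in C (state names and pool size are illustrative, not the patented design): a freed RMID is parked in a dirty state and only returned to the free pool once its stale monitoring state can be considered drained.
      /* Build: cc -O2 rmid_pool.c */
      #include <stdio.h>

      #define NUM_RMIDS 8

      /* DIRTY means freed but with monitoring state not yet drained. */
      enum rmid_state { RMID_FREE, RMID_IN_USE, RMID_DIRTY };

      static enum rmid_state rmids[NUM_RMIDS];

      static int rmid_alloc(void) {
          for (int i = 0; i < NUM_RMIDS; i++)
              if (rmids[i] == RMID_FREE) { rmids[i] = RMID_IN_USE; return i; }
          return -1;                      /* nothing free: wait for recycling */
      }

      static void rmid_release(int id) { rmids[id] = RMID_DIRTY; }

      /* Recycling pass: treat occupancy as drained and free the dirty RMIDs. */
      static void rmid_recycle(void) {
          for (int i = 0; i < NUM_RMIDS; i++)
              if (rmids[i] == RMID_DIRTY) rmids[i] = RMID_FREE;
      }

      int main(void) {
          int a = rmid_alloc(), b = rmid_alloc();
          printf("allocated RMIDs %d and %d\n", a, b);
          rmid_release(a);
          printf("realloc before recycle: %d\n", rmid_alloc());  /* skips the dirty RMID    */
          rmid_recycle();
          printf("realloc after recycle:  %d\n", rmid_alloc());  /* dirty RMID reused again */
          return 0;
      }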
  • Patent number: 11520700
    Abstract: A holistic view of cache class of service (CLOS) includes an allocation of processor cache resources to a plurality of CLOS. The allocation of processor cache resources includes allocation of cache ways for an n-way set-associative cache. Examples include monitoring usage of the plurality of CLOS to determine processor cache resource usage and to report the processor cache resource usage.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: December 6, 2022
    Assignee: Intel Corporation
    Inventors: Malini K. Bhandaru, Iosif Gasparakis, Sunku Ranganath, Liyong Qiao, Rui Zang, Dakshina Ilangovan, Shaohe Feng, Edwin Verplanke, Priya Autee, Lin A. Yang
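    A short C sketch of way-based allocation for an n-way set-associative cache (the way counts and CLOS split are assumptions): each class of service is given a contiguous group of ways expressed as a capacity bitmask, and monitoring would then report usage per CLOS.
      /* Build: cc -O2 clos_ways.c */
      #include <stdio.h>

      #define CACHE_WAYS 12   /* n-way set-associative cache (illustrative) */
      #define NUM_CLOS    3   /* classes of service sharing the cache       */

      /* Capacity bitmask giving `count` contiguous ways starting at `first`. */
      static unsigned way_mask(int first, int count) {
          return ((1u << count) - 1u) << first;
      }

      int main(void) {
          int ways_per_clos[NUM_CLOS] = { 6, 4, 2 };   /* example split of the 12 ways */
          int next_way = 0;

          for (int c = 0; c < NUM_CLOS; c++) {
              unsigned mask = way_mask(next_way, ways_per_clos[c]);
              printf("CLOS%d: %d of %d ways, mask 0x%03x\n",
                     c, ways_per_clos[c], CACHE_WAYS, mask);
              next_way += ways_per_clos[c];
          }
          return 0;
      }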
  • Patent number: 11500681
    Abstract: A compute device includes one or more processors, one or more resources capable of being utilized by the one or more processors, and a platform interconnect to facilitate communication of messages between the one or more processors and the one or more resources. The compute device is to obtain class of service data for one or more workloads to be executed by the compute device. The class of service data is indicative of a capacity of one or more of the resources to be utilized in the execution of each corresponding workload. The compute device is also to execute the one or more workloads and manage the amount of traffic transmitted through the platform interconnect for each corresponding workload as a function of the class of service data as the one or more workloads are executed.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: November 15, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Andrew J. Herdrich, Edwin Verplanke, Daniel Rivas Barragan
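    One way to picture per-workload traffic management on the platform interconnect is a token bucket sized from the class of service data; the C sketch below uses invented capacities and is an illustration, not the patented mechanism.
      /* Build: cc -O2 cos_bucket.c */
      #include <stdbool.h>
      #include <stdio.h>

      /* One bucket per workload, sized from its class-of-service data. */
      struct cos_bucket { unsigned capacity; unsigned tokens; };

      static void refill(struct cos_bucket *b) { b->tokens = b->capacity; }

      /* A message may cross the platform interconnect only if a token remains. */
      static bool admit(struct cos_bucket *b) {
          if (b->tokens == 0) return false;
          b->tokens--;
          return true;
      }

      int main(void) {
          struct cos_bucket high = { .capacity = 8, .tokens = 8 };   /* high class of service */
          struct cos_bucket low  = { .capacity = 2, .tokens = 2 };   /* low class of service  */

          for (int interval = 0; interval < 2; interval++) {
              int sent_high = 0, sent_low = 0;
              for (int msg = 0; msg < 10; msg++) {        /* both workloads offer 10 messages */
                  if (admit(&high)) sent_high++;
                  if (admit(&low))  sent_low++;
              }
              printf("interval %d: high sent %d, low sent %d\n", interval, sent_high, sent_low);
              refill(&high);
              refill(&low);
          }
          return 0;
      }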
  • Publication number: 20220286399
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for hardware queue scheduling for multi-core computing environments. An example apparatus includes a first core and a second core of a processor, and circuitry in a die of the processor, at least one of the first core or the second core included in the die, the at least one of the first core or the second core separate from the circuitry, the circuitry to enqueue an identifier to a queue implemented with the circuitry, the identifier associated with a data packet, assign the identifier in the queue to a first core of the processor, and in response to an execution of an operation on the data packet with the first core, provide the identifier to the second core to cause the second core to distribute the data packet.
    Type: Application
    Filed: September 11, 2020
    Publication date: September 8, 2022
    Inventors: Niall McDonnell, Gage Eads, Mrittika Ganguli, Chetan Hiremath, John Mangan, Stephen Palermo, Bruce Richardson, Edwin Verplanke, Praveen Mosur, Bradley Chaddick, Abhishek Khade, Abhirupa Layek, Sarita Maini, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
  • Publication number: 20220191602
    Abstract: Devices and techniques for out-of-band platform tuning and configuration are described herein. A device can include a telemetry interface to a telemetry collection system and a network interface to network adapter hardware. The device can receive platform telemetry metrics from the telemetry collection system, and network adapter silicon hardware statistics over the network interface, to gather collected statistics. The device can apply a heuristic algorithm using the collected statistics to determine processing core workloads generated by operation of a plurality of software systems communicatively coupled to the device. The device can provide a reconfiguration message to instruct at least one software system to switch operations to a different processing core, responsive to detecting an overload state on at least one processing core, based on the processing core workloads. Other embodiments are also described.
    Type: Application
    Filed: March 4, 2022
    Publication date: June 16, 2022
    Inventors: Andrew J. Herdrich, Patrick L. Connor, Dinesh Kumar, Alexander W. Min, Daniel J. Dahle, Kapil Sood, Jeffrey B. Shaw, Edwin Verplanke, Scott P. Dubal, James Robert Hearn
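    A toy C version of the heuristic step (the threshold and sample statistics are invented): combine the collected per-core statistics, detect an overloaded core, and emit a reconfiguration message naming a less-loaded target core.
      /* Build: cc -O2 core_rebalance.c */
      #include <stdio.h>

      #define NUM_CORES     4
      #define OVERLOAD_PCT 90   /* utilization above this triggers reconfiguration */

      int main(void) {
          /* Collected statistics: per-core utilization derived from platform
           * telemetry and network-adapter hardware counters (sample values). */
          int util[NUM_CORES] = { 95, 40, 20, 60 };

          for (int c = 0; c < NUM_CORES; c++) {
              if (util[c] <= OVERLOAD_PCT) continue;
              int target = c, best = util[c];
              for (int d = 0; d < NUM_CORES; d++)
                  if (util[d] < best) { best = util[d]; target = d; }
              printf("reconfiguration message: move work from core %d (%d%%) to core %d (%d%%)\n",
                     c, util[c], target, best);
          }
          return 0;
      }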
  • Publication number: 20220182284
    Abstract: Technologies for analyzing and optimizing workloads (e.g., virtual network functions) executing on edge resources are disclosed. According to one embodiment disclosed herein, a compute device launches a virtualized system including a virtual network function and a performance manager, the performance manager to monitor a current resource usage of the virtual network function as a function of a performance profile. The compute device determines, in response to a determination that one or more quality-of-service (QoS) requirements is not satisfied, whether one or more resources from the platform are available for satisfying the QoS requirements. The compute device receives, in response to a determination that the one or more resources are available for satisfying the QoS requirements, the one or more resources and updates the performance profile as a function of the received resources.
    Type: Application
    Filed: October 19, 2021
    Publication date: June 9, 2022
    Inventors: Rashmin Patel, Monica Kenguva, Francesc Guim Bernat, Edwin Verplanke, Andrew J. Herdrich
  • Publication number: 20220155847
    Abstract: Examples described herein relate to circuitry to cause a processor to enter reduced power consumption state and circuitry to, based on a write to one or more of multiple memory regions, cause the processor to exit reduced power consumption state, wherein the multiple memory regions store receive descriptors associated with one or more packets received by a network interface device. In some examples, multiple memory regions are defined by a driver of the network interface device. In some examples, the reduced power consumption state comprises a TPAUSE state.
    Type: Application
    Filed: December 22, 2021
    Publication date: May 19, 2022
    Inventors: Konstantin ANANYEV, Anatoly BURAKOV, David HUNT, Chris MACNAMARA, Edwin VERPLANKE, Omkar MASLEKAR, Gilbert NEIGER, Rajesh M. SANKARAN
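    As a rough illustration of monitoring a descriptor region and waiting in a reduced power state, the C sketch below uses the UMONITOR/UMWAIT intrinsics from <immintrin.h>; it compiles with gcc -mwaitpkg, runs only on CPUs with the WAITPKG feature, and is a stand-alone demonstration rather than the driver mechanism described in the abstract.
      /* Build: gcc -O2 -mwaitpkg umwait_sketch.c
       * Requires a CPU with the WAITPKG feature (UMONITOR/UMWAIT/TPAUSE). */
      #include <immintrin.h>
      #include <x86intrin.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Stand-in for a receive-descriptor region a network device would write. */
      static volatile uint64_t rx_descriptor_tail;

      int main(void) {
          uint64_t seen = rx_descriptor_tail;

          /* Arm address monitoring on the descriptor region, then enter the
           * low-power wait.  ctl = 0 requests the deeper C0.2 state; the second
           * argument is a TSC deadline after which UMWAIT returns regardless. */
          _umonitor((void *)&rx_descriptor_tail);
          unsigned char cf = _umwait(0, __rdtsc() + 500000000ull);

          /* The returned carry flag is set when the wait was cut short by the
           * OS-imposed wait limit rather than by a write or the deadline. */
          if (rx_descriptor_tail != seen)
              printf("woken by a descriptor write\n");
          else
              printf("no descriptor write before the deadline (cf=%u)\n", cf);
          return 0;
      }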
  • Publication number: 20220114270
    Abstract: Examples described herein relate to offload circuitry comprising one or more compute engines that are configurable to perform a workload offloaded from a process executed by a processor based on a descriptor particular to the workload. In some examples, the offload circuitry is configurable to perform the workload, among multiple different workloads. In some examples, the multiple different workloads include one or more of: data transformation (DT) for data format conversion, Locality Sensitive Hashing (LSH) for neural network (NN), similarity search, sparse general matrix-matrix multiplication (SpGEMM) acceleration of hash based sparse matrix multiplication, data encode, data decode, or embedding lookup.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Ren WANG, Sameh GOBRIEL, Somnath PAUL, Yipeng WANG, Priya AUTEE, Abhirupa LAYEK, Shaman NARAYANA, Edwin VERPLANKE, Mrittika GANGULI, Jr-Shian TSAI, Anton SOROKIN, Suvadeep BANERJEE, Abhijit DAVARE, Desmond KIRKPATRICK
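    A software stand-in in C for submitting a workload-specific descriptor to offload circuitry (the opcode and field names are invented for the sketch): the descriptor selects which compute engine runs, and one submission path covers the different offloaded workloads.
      /* Build: cc -O2 offload_desc.c */
      #include <stdio.h>

      /* Workload selector carried in the descriptor (names are illustrative). */
      enum offload_op { OP_DATA_TRANSFORM, OP_LSH, OP_SPGEMM, OP_ENCODE, OP_DECODE, OP_EMBED_LOOKUP };

      struct offload_desc {
          enum offload_op op;   /* which compute engine / workload to run */
          const void *src;      /* input buffer                           */
          void *dst;            /* output buffer                          */
          unsigned len;         /* payload length in bytes                */
      };

      /* Software stand-in for the offload circuitry consuming a descriptor. */
      static void submit(const struct offload_desc *d) {
          switch (d->op) {
          case OP_DATA_TRANSFORM: printf("engine: data format conversion, %u bytes\n", d->len); break;
          case OP_EMBED_LOOKUP:   printf("engine: embedding lookup, %u bytes\n", d->len);       break;
          default:                printf("engine: op %d, %u bytes\n", (int)d->op, d->len);      break;
          }
      }

      int main(void) {
          char in[64] = {0}, out[64] = {0};
          struct offload_desc d1 = { OP_DATA_TRANSFORM, in, out, sizeof in };
          struct offload_desc d2 = { OP_EMBED_LOOKUP,   in, out, sizeof in };
          submit(&d1);   /* each offloaded workload gets its own descriptor */
          submit(&d2);
          return 0;
      }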
  • Publication number: 20220103530
    Abstract: Examples described herein relate to a network interface device that includes circuitry, configured to perform encryption of data, generate one or more packets from the encrypted data, cause transmission of the one or more packets with the encrypted data, manage reliability of transport of the transmitted one or more packets with the encrypted data, and share protocol state information between a host system and the network interface device using connectivity based on user space accessible queues.
    Type: Application
    Filed: December 7, 2021
    Publication date: March 31, 2022
    Inventors: Daniel DALY, Anjali Singhai JAIN, Yadong LI, Stephen DOYLE, Naru Dames SUNDAR, Chih-Jen CHANG, Sailesh BISSESSUR, Andrew CUNNINGHAM, Edwin VERPLANKE, Patrick FLEMING
  • Patent number: 11272267
    Abstract: Devices and techniques for out-of-band platform tuning and configuration are described herein. A device can include a telemetry interface to a telemetry collection system and a network interface to network adapter hardware. The device can receive platform telemetry metrics from the telemetry collection system, and network adapter silicon hardware statistics over the network interface, to gather collected statistics. The device can apply a heuristic algorithm using the collected statistics to determine processing core workloads generated by operation of a plurality of software systems communicatively coupled to the device. The device can provide a reconfiguration message to instruct at least one software system to switch operations to a different processing core, responsive to detecting an overload state on at least one processing core, based on the processing core workloads. Other embodiments are also described.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: March 8, 2022
    Assignee: Intel Corporation
    Inventors: Andrew J. Herdrich, Patrick L. Connor, Dinesh Kumar, Alexander W. Min, Daniel J. Dahle, Kapil Sood, Jeffrey B. Shaw, Edwin Verplanke, Scott P. Dubal, James Robert Hearn
  • Publication number: 20220045929
    Abstract: A device of a service coordinating entity includes communications circuitry to communicate with a plurality of access networks via a corresponding plurality of network function virtualization (NFV) instances, processing circuitry, and a memory device. The processing circuitry is to perform operations to monitor stored performance metrics for the plurality of NFV instances. Each of the NFV instances is instantiated by a corresponding scheduler of a plurality of schedulers on a virtualization infrastructure of the service coordinating entity. A plurality of stored threshold metrics is retrieved, indicating a desired level for each of the plurality of performance metrics. A threshold condition is detected for at least one of the performance metrics for an NFV instance of the plurality of NFV instances, based on the retrieved plurality of threshold metrics. A hardware resource used by the NFV instance to communicate with an access network is adjusted based on the detected threshold condition.
    Type: Application
    Filed: August 19, 2021
    Publication date: February 10, 2022
    Inventors: Francesc Guim Bernat, Andrew J. Herdrich, Karthik Kumar, Felipe Pastor Beneyto, Edwin Verplanke, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith
  • Publication number: 20220014459
    Abstract: Examples described herein relate to network layer 7 (L7) offload to an infrastructure processing unit (IPU) for a service mesh. An apparatus described herein includes an IPU comprising an IPU memory to store a routing table for a service mesh, the routing table to map shared memory address spaces of the IPU and a host device executing one or more microservices, wherein the service mesh provides an infrastructure layer for the one or more microservices executing on the host device; and one or more IPU cores communicably coupled to the IPU memory, the one or more IPU cores to: host a network L7 proxy endpoint for the service mesh, and communicate messages between the network L7 proxy endpoint and an L7 interface device of the one or more microservices by copying data between the shared memory address spaces of the IPU and the host device based on the routing table.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 13, 2022
    Applicant: Intel Corporation
    Inventors: Mrittika Ganguli, Anjali Jain, Reshma Lal, Edwin Verplanke, Priya Autee, Chih-Jen Chang, Abhirupa Layek, Nupur Jain
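    A host-side C sketch of the routing-table idea above (all names are invented; the real data path runs on the IPU): the L7 proxy endpoint looks up the microservice's paired shared-memory regions and forwards a message by copying between them.
      /* Build: cc -O2 ipu_route.c */
      #include <stdio.h>
      #include <string.h>

      /* Routing-table entry mapping a microservice to paired shared-memory
       * regions on the host and on the IPU (all names are illustrative). */
      struct route {
          int      service_id;
          char    *host_region;   /* shared address space on the host side */
          char    *ipu_region;    /* shared address space on the IPU side  */
          unsigned size;
      };

      /* The L7 proxy endpoint forwards a message by copying it between the
       * paired regions selected from the routing table. */
      static void proxy_to_service(struct route *table, int n, int service_id,
                                   const char *msg) {
          for (int i = 0; i < n; i++) {
              if (table[i].service_id != service_id) continue;
              strncpy(table[i].ipu_region, msg, table[i].size - 1);
              memcpy(table[i].host_region, table[i].ipu_region, table[i].size);
              printf("service %d received: %s\n", service_id, table[i].host_region);
              return;
          }
      }

      int main(void) {
          static char host_buf[64], ipu_buf[64];
          struct route table[] = { { .service_id = 1, .host_region = host_buf,
                                     .ipu_region = ipu_buf, .size = sizeof host_buf } };
          proxy_to_service(table, 1, 1, "GET /inventory HTTP/1.1");
          return 0;
      }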
  • Publication number: 20210374848
    Abstract: Systems, methods, and apparatuses for resource bandwidth monitoring and control are described. For example, in some embodiments, an apparatus is detailed comprising a requestor device to send a credit-based request, a receiver device to receive and consume the credit-based request, and a delay element in a return path between the requestor and receiver devices, the delay element to delay a credit-based response from the receiver to the requestor.
    Type: Application
    Filed: August 13, 2021
    Publication date: December 2, 2021
    Inventors: Andrew HERDRICH, Edwin VERPLANKE, Ravishankar IYER, Christopher GIANOS, Jeffrey D. CHAMBERLAIN, Ronak SINGH, Julius MANDELBLAT, Bret TOLL
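    A cycle-by-cycle C simulation of the credit scheme (the credit count and delay length are invented): the requestor spends a credit per request, the receiver's credit return passes through the delay element, and lengthening that delay caps the requestor's effective bandwidth; with 2 credits and a 3-cycle return delay, at most 2 requests go out every 3 cycles.
      /* Build: cc -O2 credit_delay.c */
      #include <stdio.h>

      #define MAX_CREDITS   2   /* credits the requestor starts with (assumed)       */
      #define RETURN_DELAY  3   /* cycles the delay element holds each credit return */

      int main(void) {
          int credits = MAX_CREDITS;
          int in_flight[RETURN_DELAY] = { 0 };   /* credit returns inside the delay element */
          int issued = 0;

          for (int cycle = 0; cycle < 12; cycle++) {
              credits += in_flight[cycle % RETURN_DELAY];   /* delayed credit arrives      */
              in_flight[cycle % RETURN_DELAY] = 0;

              int sent = 0;
              if (credits > 0) {                            /* requestor spends a credit   */
                  credits--;
                  sent = 1;
                  issued++;
                  in_flight[cycle % RETURN_DELAY] = 1;      /* return re-enters delay line */
              }
              printf("cycle %2d: sent=%d credits=%d\n", cycle, sent, credits);
          }
          printf("issued %d requests in 12 cycles\n", issued);
          return 0;
      }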
  • Patent number: 11171831
    Abstract: Technologies for analyzing and optimizing workloads (e.g., virtual network functions) executing on edge resources are disclosed. According to one embodiment disclosed herein, a compute device launches a virtualized system including a virtual network function and a performance manager, the performance manager to monitor a current resource usage of the virtual network function as a function of a performance profile. The compute device determines, in response to a determination that one or more quality-of-service (QoS) requirements is not satisfied, whether one or more resources from the platform are available for satisfying the QoS requirements. The compute device receives, in response to a determination that the one or more resources are available for satisfying the QoS requirements, the one or more resources and updates the performance profile as a function of the received resources.
    Type: Grant
    Filed: March 30, 2019
    Date of Patent: November 9, 2021
    Assignee: Intel Corporation
    Inventors: Rashmin Patel, Monica Kenguva, Francesc Guim Bernat, Edwin Verplanke, Andrew J. Herdrich
  • Patent number: 11121957
    Abstract: A device of a service coordinating entity includes communications circuitry to communicate with a plurality of access networks via a corresponding plurality of network function virtualization (NFV) instances, processing circuitry, and a memory device. The processing circuitry is to perform operations to monitor stored performance metrics for the plurality of NFV instances. Each of the NFV instances is instantiated by a corresponding scheduler of a plurality of schedulers on a virtualization infrastructure of the service coordinating entity. A plurality of stored threshold metrics is retrieved, indicating a desired level for each of the plurality of performance metrics. A threshold condition is detected for at least one of the performance metrics for an NFV instance of the plurality of NFV instances, based on the retrieved plurality of threshold metrics. A hardware resource used by the NFV instance to communicate with an access network is adjusted based on the detected threshold condition.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: September 14, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Andrew J. Herdrich, Karthik Kumar, Felipe Pastor Beneyto, Edwin Verplanke, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith