Patents by Inventor Edwin Verplanke

Edwin Verplanke has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11121957
    Abstract: A device of a service coordinating entity includes communications circuitry to communicate with a plurality of access networks via a corresponding plurality of network function virtualization (NFV) instances, processing circuitry, and a memory device. The processing circuitry is to perform operations to monitor stored performance metrics for the plurality of NFV instances. Each of the NFV instances is instantiated by a corresponding scheduler of a plurality of schedulers on a virtualization infrastructure of the service coordinating entity. A plurality of stored threshold metrics is retrieved, indicating a desired level for each of the plurality of performance metrics. A threshold condition is detected for at least one of the performance metrics for an NFV instance of the plurality of NFV instances, based on the retrieved plurality of threshold metrics. A hardware resource used by the NFV instance to communicate with an access network is adjusted based on the detected threshold condition.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: September 14, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Andrew J. Herdrich, Karthik Kumar, Felipe Pastor Beneyto, Edwin Verplanke, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith
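    The monitoring loop this abstract describes can be pictured with a minimal C sketch. Every type and function name below (nfv_metrics, nfv_threshold, adjust_hw_resource) is hypothetical and stands in for whatever the service coordinating entity actually exposes; this is an illustration of the threshold-check idea, not the patented implementation.
        #include <stddef.h>
        #include <stdbool.h>

        /* Hypothetical per-instance performance metrics and desired thresholds. */
        struct nfv_metrics   { double throughput_gbps; double latency_us; };
        struct nfv_threshold { double min_throughput_gbps; double max_latency_us; };

        /* Hypothetical hook (stubbed here) that grows or shrinks a hardware resource
         * (e.g. queues, cache ways, bandwidth) used by one NFV instance. */
        static void adjust_hw_resource(size_t instance, int delta)
        {
            (void)instance; (void)delta;
        }

        /* Check every NFV instance against its stored thresholds and react. */
        static void monitor_nfv_instances(const struct nfv_metrics *m,
                                          const struct nfv_threshold *t,
                                          size_t count)
        {
            for (size_t i = 0; i < count; i++) {
                bool threshold_hit = m[i].throughput_gbps < t[i].min_throughput_gbps ||
                                     m[i].latency_us      > t[i].max_latency_us;
                if (threshold_hit)
                    adjust_hw_resource(i, +1);  /* threshold condition detected: add resources */
            }
        }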
  • Patent number: 11121940
    Abstract: Examples include techniques to meet quality of service (QoS) requirements for a fabric point-to-point connection. Examples include an application hosted by a compute node coupled with a fabric requesting bandwidth for a point-to-point connection through the fabric, and the request being granted or not granted based at least partially on whether bandwidth is available for allocation to meet one or more QoS requirements.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: September 14, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Andrew Herdrich, Edwin Verplanke
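    A rough C sketch of the admission decision the abstract describes: grant a point-to-point connection only if unallocated fabric bandwidth still covers the QoS requirement. The structure and field names are illustrative, not taken from the patent.
        #include <stdbool.h>

        /* Illustrative bookkeeping for one fabric path. */
        struct fabric_path {
            double capacity_gbps;    /* total bandwidth of the path                   */
            double allocated_gbps;   /* bandwidth already promised to other QoS flows */
        };

        /* Grant the request only when enough bandwidth remains to meet the QoS target. */
        static bool request_p2p_bandwidth(struct fabric_path *p, double requested_gbps)
        {
            if (p->capacity_gbps - p->allocated_gbps < requested_gbps)
                return false;                 /* not granted: would violate existing QoS */
            p->allocated_gbps += requested_gbps;
            return true;                      /* granted and reserved */
        }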
  • Publication number: 20210272467
    Abstract: In one embodiment, an apparatus comprises a memory and a processor. The memory is to store sensor data, wherein the sensor data is captured by a plurality of sensors within an educational environment. The processor is to: access the sensor data captured by the plurality of sensors; identify a student within the educational environment based on the sensor data; detect a plurality of events associated with the student based on the sensor data, wherein each event is indicative of an attention level of the student within the educational environment; generate a report based on the plurality of events associated with the student; and send the report to a third party associated with the student.
    Type: Application
    Filed: September 28, 2018
    Publication date: September 2, 2021
    Inventors: Shao-Wen Yang, Addicam V. Sanjay, Karthik Veeramani, Gabriel L. Silva, Marcos P. Da Silva, Jose A. Avalos, Stephen T. Palermo, Glen J. Anderson, Meng Shi, Benjamin W. Bair, Pete A. Denman, Reese L. Bowes, Rebecca A. Chierichetti, Ankur Agrawal, Mrutunjayya Mrutunjayya, Gerald A. Rogers, Shih-Wei Roger Chien, Lenitra M. Durham, Giuseppe Raffa, Irene Liew, Edwin Verplanke
  • Publication number: 20210216453
    Abstract: Disclosed herein are systems and methods for isolating input/output computing resources. In some embodiments, a host device may include a processor and logic coupled with the processor, to identify a tag identifier (Tag ID) for a process or container of the host device. The Tag ID may identify a queue pair of a hardware device of the host device for an outbound transaction from the processor to the hardware device, to be conducted by the process or container. Logic may further map the Tag ID to a Process Address Space Identifier (PASID) associated with an inbound transaction from the hardware device to the processor that used the identified queue pair. The process or container may use the PASID to conduct the outbound transaction via the identified queue pair. Other embodiments may be disclosed and/or claimed.
    Type: Application
    Filed: March 29, 2021
    Publication date: July 15, 2021
    Inventors: Cunming LIANG, Edwin VERPLANKE, David E. COHEN, Danny Yigang ZHOU
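    The Tag ID to PASID mapping can be imagined as a small lookup table. The sketch below is only an assumption about the shape of such a data structure, with invented names; it does not reflect the actual host driver interfaces.
        #include <stdint.h>
        #include <stddef.h>

        /* One entry: a Tag ID names a queue pair for outbound work and is mapped to
         * the PASID the device will present on the matching inbound transaction. */
        struct tag_map_entry {
            uint16_t tag_id;      /* identifies the queue pair used by the process/container */
            uint32_t pasid;       /* Process Address Space Identifier for inbound traffic    */
            uint16_t queue_pair;  /* hardware queue pair bound to this Tag ID                */
        };

        /* Linear lookup over a hypothetical host-maintained table. */
        static const struct tag_map_entry *
        lookup_pasid(const struct tag_map_entry *tbl, size_t n, uint16_t tag_id)
        {
            for (size_t i = 0; i < n; i++)
                if (tbl[i].tag_id == tag_id)
                    return &tbl[i];
            return NULL;
        }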
  • Publication number: 20210117244
    Abstract: Examples provide a system that includes one or more processors that, when operational, are to: based on content in a request being within a permitted range for a virtualized execution environment, transfer the request from the virtualized execution environment to reserve one or more device resources without causing a virtual machine exit to request reservation of the one or more device resources. In some examples, the transfer comprises a write to a register. In some examples, processor-executed microcode is to determine whether content in the request is within a permitted range for the virtualized execution environment.
    Type: Application
    Filed: December 26, 2020
    Publication date: April 22, 2021
    Inventors: Andrew J. HERDRICH, Priya AUTEE, Rajesh M. SANKARAN, Gilbert NEIGER, Scott OEHRLEIN, Michael PRINKE, Ravi IYER, Edwin VERPLANKE
  • Publication number: 20210073129
    Abstract: Examples described herein relate to a manner of demoting multiple cache lines to shared memory. In some examples, a shared cache is accessible by at least two processor cores and a region of the cache is larger than a cache line and is designated for demotion from the cache to the shared cache. In some examples, the cache line corresponds to a memory address in a region of memory. In some examples, an indication that the region of memory is associated with a cache line demote operation is provided in an indicator in a page table entry (PTE). In some examples, the indication that the region of memory is associated with a cache line demote operation is based on a command in an application executed by a processor. In some examples, the cache is a level 1 (L1) or level 2 (L2) cache.
    Type: Application
    Filed: October 30, 2020
    Publication date: March 11, 2021
    Inventors: Rahul R. SHAH, Omkar MASLEKAR, Priya AUTEE, Edwin VERPLANKE, Andrew J. HERDRICH, Jeffrey D. CHAMBERLAIN
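    Software today can approximate the per-line half of this idea with the CLDEMOTE hint. The C sketch below assumes a compiler that exposes the _cldemote intrinsic (immintrin.h, built with -mcldemote) and a CPU that supports it; the PTE-marked region demotion the abstract describes is a hardware mechanism and is not shown.
        #include <immintrin.h>
        #include <stddef.h>

        #define CACHE_LINE 64

        /* Hint the CPU to demote every cache line in [buf, buf+len) from the core's
         * L1/L2 toward the shared cache, so a consumer core can pick it up cheaply. */
        static void demote_region(void *buf, size_t len)
        {
            char *p = (char *)buf;
            for (size_t off = 0; off < len; off += CACHE_LINE)
                _cldemote(p + off);    /* advisory: the CPU may ignore the hint */
        }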
  • Patent number: 10936490
    Abstract: Method and apparatus for per-agent control and quality of service of shared resources in a chip multiprocessor platform are described herein. One embodiment of a system includes: a plurality of core and non-core requestors of shared resources, the shared resources to be provided by one or more resource providers, each of the plurality of core and non-core requestors to be associated with a resource-monitoring tag and a resource-control tag; a mapping table to store the resource monitoring and control tags associated with each non-core requestor; and a tagging circuitry to receive a resource request sent from a non-core requestor to a resource provider, the tagging circuitry to responsively modify the resource request to include the resource-monitoring and resource-control tags associated with the non-core requestor in accordance with the mapping table and send the modified resource request to the resource provider.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: March 2, 2021
    Assignee: Intel Corporation
    Inventors: Andrew J. Herdrich, Edwin Verplanke, Stephen R. Van Doren, Ravishankar Iyer, Eric R. Wehage, Rupin H. Vakharwala, Rajesh M. Sankaran, Jeffrey D. Chamberlain, Julius Mandelblat, Yen-Cheng Liu, Stephen T. Palermo, Tsung-Yuan C. Tai
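    The mapping table and tagging step can be sketched in C as follows. Requestor IDs, tag widths, and function names are hypothetical and only illustrate the idea of attaching resource-monitoring and resource-control tags to a non-core request before it reaches the resource provider.
        #include <stdint.h>

        /* Illustrative tags: an RMID-style monitoring tag and a CLOS-style control tag. */
        struct agent_tags { uint16_t monitor_tag; uint16_t control_tag; };

        /* Hypothetical mapping table indexed by non-core requestor ID. */
        #define MAX_AGENTS 64
        static struct agent_tags tag_table[MAX_AGENTS];

        struct resource_request {
            uint16_t requestor_id;
            uint16_t monitor_tag;   /* filled in by the tagging step */
            uint16_t control_tag;   /* filled in by the tagging step */
            uint64_t payload;
        };

        /* Tagging step: decorate the request before forwarding it to the provider. */
        static void tag_request(struct resource_request *req)
        {
            const struct agent_tags *t = &tag_table[req->requestor_id % MAX_AGENTS];
            req->monitor_tag = t->monitor_tag;
            req->control_tag = t->control_tag;
        }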
  • Patent number: 10929323
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus include multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
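    One way to picture the resource management described here is credit-based enqueue: a core may only submit a request while it holds credits, which are returned as work drains. The C sketch below is a software analogy with invented names, not the device's real programming interface.
        #include <stdbool.h>
        #include <stdint.h>

        #define RING_SIZE 1024u

        /* Invented software model of a per-core submission ring with credits;
         * credits and indices are assumed to be initialized by the caller. */
        struct hqm_port {
            uint64_t ring[RING_SIZE];
            uint32_t head, tail;
            uint32_t credits;        /* how many more requests this core may post */
        };

        /* Enqueue only while credits remain, so a slow consumer cannot be overrun. */
        static bool hqm_enqueue(struct hqm_port *p, uint64_t request)
        {
            if (p->credits == 0)
                return false;                        /* back-pressure: core must retry */
            p->ring[p->tail % RING_SIZE] = request;
            p->tail++;
            p->credits--;
            return true;
        }

        /* The device (modeled here in software) returns a credit per completed item. */
        static void hqm_complete(struct hqm_port *p)
        {
            if (p->head != p->tail) {
                p->head++;
                p->credits++;
            }
        }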
  • Publication number: 20210042228
    Abstract: Examples provide a system that includes at least one processor; a cache; a memory; an interface to copy data from a received packet to the memory or the at least one cache; and a controller to manage use of at least one region of the cache. In some examples, the controller is to: indicate availability of a cache region reservation feature; receive a request to reserve a region of the cache from a requester; and based on the requested region being permitted to be reserved by the requester, solely allow the requester to write data to at least a portion of the reserved region. In some examples, the controller is to write to a register to indicate availability of a cache region reservation feature. In some examples, the request to reserve a region of the cache from a requester comprises a specification of a number of sets, a number of ways, and a class of service.
    Type: Application
    Filed: October 13, 2020
    Publication date: February 11, 2021
    Inventors: Andrew J. HERDRICH, Priya AUTEE, Abhishek KHADE, Patrick LU, Edwin VERPLANKE, Vedvyas SHANBHOGUE
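    The reservation request this abstract mentions (a number of sets, a number of ways, and a class of service) can be pictured as a small struct plus a grant check. Everything below is a hypothetical model with invented names, not the controller's actual register interface.
        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical reservation request: which slice of the cache is wanted. */
        struct cache_reservation {
            uint16_t requester_id;
            uint16_t num_sets;
            uint16_t num_ways;
            uint8_t  clos;          /* class of service the region is charged to */
        };

        /* Hypothetical controller state: ways still free and per-CLOS permission. */
        struct cache_ctrl {
            uint16_t free_ways;
            bool     clos_may_reserve[16];
        };

        /* Grant only if the CLOS is permitted and enough ways remain unreserved. */
        static bool reserve_region(struct cache_ctrl *c, const struct cache_reservation *r)
        {
            if (r->clos >= 16 || !c->clos_may_reserve[r->clos])
                return false;
            if (r->num_ways > c->free_ways)
                return false;
            c->free_ways -= r->num_ways;     /* region now writable only by the requester */
            return true;
        }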
  • Publication number: 20210042146
    Abstract: Systems, methods, and apparatuses for resource monitoring identification reuse are described. In an embodiment, a system is described comprising a hardware processor core to execute instructions, storage for resource monitoring identification (RMID) recycling instructions to be executed by the hardware processor core, and a logical processor to execute on the hardware processor core, the logical processor including associated storage for an RMID and state.
    Type: Application
    Filed: October 22, 2020
    Publication date: February 11, 2021
    Inventors: Matthew FLEMING, Edwin VERPLANKE, Andrew HERDRICH, Ravishankar IYER
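    RMID reuse is essentially pool management: a freed RMID should not be handed out again until the hardware counters it tagged have drained. The C sketch below models that with an invented "limbo" list; the names and the drain test are assumptions for illustration, not the architectural mechanism.
        #include <stdbool.h>
        #include <stdint.h>

        #define NUM_RMIDS 256u

        enum rmid_state { RMID_FREE, RMID_IN_USE, RMID_LIMBO };

        static enum rmid_state rmid_state[NUM_RMIDS];

        /* Hypothetical query (stubbed here): has the occupancy tagged with this
         * RMID drained yet? Real code would read the monitoring counters. */
        static bool rmid_occupancy_is_low(uint32_t rmid) { (void)rmid; return true; }

        /* Allocate: prefer FREE RMIDs, then recycle any LIMBO RMID that has drained. */
        static int rmid_alloc(void)
        {
            for (uint32_t r = 0; r < NUM_RMIDS; r++)
                if (rmid_state[r] == RMID_FREE) { rmid_state[r] = RMID_IN_USE; return (int)r; }
            for (uint32_t r = 0; r < NUM_RMIDS; r++)
                if (rmid_state[r] == RMID_LIMBO && rmid_occupancy_is_low(r)) {
                    rmid_state[r] = RMID_IN_USE;
                    return (int)r;
                }
            return -1;                         /* nothing recyclable yet */
        }

        /* Free: park the RMID in limbo instead of reusing it immediately. */
        static void rmid_release(uint32_t rmid)
        {
            if (rmid < NUM_RMIDS)
                rmid_state[rmid] = RMID_LIMBO;
        }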
  • Publication number: 20210026769
    Abstract: Examples include techniques to support a holistic view of cache class of service (CLOS). Examples include allocating processor cache resources to a plurality of CLOS. The allocation of processor cache resources includes allocation of cache ways for an n-way set-associative cache. Examples include monitoring usage of the plurality of CLOS to determine processor cache resource usage and to report the processor cache resource usage.
    Type: Application
    Filed: June 29, 2018
    Publication date: January 28, 2021
    Inventors: Malini K. BHANDARU, Iosif GASPARAKIS, Sunku RANGANATH, Liyong QIAO, Rui ZANG, Dakshina ILANGOVAN, Shaohe FENG, Edwin VERPLANKE, Priya AUTEE, Lin A. YANG
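    On Linux, allocating cache ways to a class of service and reading usage back is exposed through the resctrl filesystem, which gives a concrete feel for the allocate-and-monitor flow in the abstract. The sketch assumes resctrl is mounted at /sys/fs/resctrl and that a group named clos_net already exists; both are assumptions about the local setup, not part of the publication.
        #include <stdio.h>

        /* Write an L3 way bitmask into a resctrl group's schemata (e.g. 0xf0 = four ways
         * on cache domain 0). Requires appropriate privileges. */
        static int set_l3_ways(const char *group, unsigned way_mask)
        {
            char path[256];
            snprintf(path, sizeof path, "/sys/fs/resctrl/%s/schemata", group);
            FILE *f = fopen(path, "w");
            if (!f) return -1;
            fprintf(f, "L3:0=%x\n", way_mask);
            return fclose(f);
        }

        /* Read back how many bytes of L3 the group currently occupies. */
        static long read_l3_occupancy(const char *group)
        {
            char path[256];
            long bytes = -1;
            snprintf(path, sizeof path,
                     "/sys/fs/resctrl/%s/mon_data/mon_L3_00/llc_occupancy", group);
            FILE *f = fopen(path, "r");
            if (!f) return -1;
            if (fscanf(f, "%ld", &bytes) != 1) bytes = -1;
            fclose(f);
            return bytes;
        }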
  • Patent number: 10771554
    Abstract: Disclosed embodiments relate to cloud scaling with non-blocking, non-spinning cross-domain event synchronization and data communication. In an example, a processor includes a memory to store multiple virtual hardware thread (VHTR) descriptors, each including an architectural state, a monitored address range, a priority, and an execution state, fetch circuitry to fetch instructions associated with a plurality of virtualized network functions (VNFs), decode circuitry to decode the fetched instructions, scheduling circuitry to allocate and pin a VHTR to each of the plurality of VNFs, schedule execution of a VHTR on each of a plurality of cores, set the execution state of the scheduled VHTR; and in response to a monitor instruction received from a given VHTR, pause the given VHTR and switch in another VHTR to use the core previously used by the given VHTR, and, upon detecting a store to the monitored address range, trigger execution of the given VHTR.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij A. Doshi, Edwin Verplanke
  • Patent number: 10713195
    Abstract: Embodiments of an invention for interrupts between virtual machines are disclosed. In an embodiment, a processor includes an instruction unit and an execution unit, both implemented at least partially in hardware of the processor. The instruction unit is to receive an instruction to send an interrupt to a target virtual machine. The execution unit is to execute the instruction on a sending virtual machine without exiting the sending virtual machine. Execution of the instruction includes using a handle specified by the instruction to find a posted interrupt descriptor.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Jr-Shian Tsai, Ravi L Sahita, Mesut A Ergin, Rajesh M Sankaran, Gilbert Neiger, Jun Nakajima, Edwin Verplanke, Barry E Huntley, Tsung-Yuan C Tai
  • Publication number: 20200183729
    Abstract: Methods and apparatus for evolving hypervisor pass-through devices supporting platform independence through a core solution called MUSE (Mdev in User SpacE) that allows a mediated pass-through device to be served by software running in user space. The MUSE architecture supports platform hardware independence while providing pass-through performance similar to hardware-specific solutions and providing enhanced performance in virtualized environments using existing software components, including various operating systems and associated libraries for implementing SDN (Software Defined Networking) and VNF (Virtualized Network Function).
    Type: Application
    Filed: January 9, 2020
    Publication date: June 11, 2020
    Inventors: Xiuchun Lu, Cunming Liang, Shaopeng He, Nrupal Jani, Anjali Jain, Edwin Verplanke, Parthasarathy Sarangam, Zhirun Yan
  • Publication number: 20200125389
    Abstract: Methods, apparatus, systems and machine-readable storage media of an edge computing device using an edge server CPU with dynamic deterministic scaling are disclosed. A processing circuitry arrangement includes processing circuitry with processor cores operating at a center base frequency, and memory. The memory includes instructions configuring the processing circuitry to configure a first set of the processor cores of the CPU to switch from operating at the center base frequency to operating at a first modified base frequency, and a second set of the processor cores to switch from operating at the center base frequency to operating at a second modified base frequency. A same processor core within the first set or the second set can be configured to switch between operating at the first modified base frequency and operating at the second modified base frequency.
    Type: Application
    Filed: November 8, 2019
    Publication date: April 23, 2020
    Inventors: Stephen T. Palermo, Nikhil Gupta, Vasudevan Srinivasan, Christopher MacNamara, Sarita Maini, Abhishek Khade, Edwin Verplanke, Lokpraveen Mosur
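    A rough software analog of splitting cores into two base-frequency sets can be driven from user space through Linux cpufreq; the C sketch below simply writes a frequency cap per core. The sysfs paths are the standard cpufreq ones, but treating them as the mechanism behind this publication is an assumption; the feature described is configured in the CPU itself, and the core ranges and frequencies here are made up.
        #include <stdio.h>

        /* Cap one core's scaling_max_freq (kHz) via Linux cpufreq sysfs (needs privileges). */
        static int set_core_max_khz(int cpu, long khz)
        {
            char path[128];
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
            FILE *f = fopen(path, "w");
            if (!f) return -1;
            fprintf(f, "%ld\n", khz);
            return fclose(f);
        }

        /* Example: cores 0-3 get a higher cap, cores 4-7 a lower one. */
        static void split_frequency_sets(void)
        {
            for (int cpu = 0; cpu < 4; cpu++) set_core_max_khz(cpu, 2700000);
            for (int cpu = 4; cpu < 8; cpu++) set_core_max_khz(cpu, 2100000);
        }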
  • Publication number: 20200042479
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus include multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Applicant: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Publication number: 20200012514
    Abstract: Systems, methods, and apparatuses for resource monitoring identification reuse are described. In an embodiment, a system is described comprising a hardware processor core to execute instructions, storage for resource monitoring identification (RMID) recycling instructions to be executed by the hardware processor core, and a logical processor to execute on the hardware processor core, the logical processor including associated storage for an RMID and state.
    Type: Application
    Filed: May 9, 2019
    Publication date: January 9, 2020
    Inventors: Matthew FLEMING, Edwin VERPLANKE, Andrew HERDRICH, Ravishankar IYER
  • Publication number: 20190356971
    Abstract: Devices and techniques for out-of-band platform tuning and configuration are described herein. A device can include a telemetry interface to a telemetry collection system and a network interface to network adapter hardware. The device can receive platform telemetry metrics from the telemetry collection system, and network adapter silicon hardware statistics over the network interface, to gather collected statistics. The device can apply a heuristic algorithm using the collected statistics to determine processing core workloads generated by operation of a plurality of software systems communicatively coupled to the device. The device can provide a reconfiguration message to instruct at least one software system to switch operations to a different processing core, responsive to detecting an overload state on at least one processing core, based on the processing core workloads. Other embodiments are also described.
    Type: Application
    Filed: April 22, 2019
    Publication date: November 21, 2019
    Inventors: Andrew J. Herdrich, Patrick L. Connor, Dinesh Kumar, Alexander W. Min, Daniel J. Dahle, Kapil Sood, Jeffrey B. Shaw, Edwin Verplanke, Scott P. Dubal, James Robert Hearn
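    The heuristic step in this abstract, detecting an overloaded core and telling a software system to move, can be pictured with a few lines of C. The structures and the reconfiguration callback are hypothetical; the publication's actual heuristic is not specified here.
        #include <stddef.h>

        /* Hypothetical collected statistics: utilization per core, 0.0 to 1.0. */
        struct core_stats { double utilization; };

        /* Hypothetical hook (stubbed) that sends a reconfiguration message to a software system. */
        static void send_reconfig_message(size_t overloaded_core, size_t target_core)
        {
            (void)overloaded_core; (void)target_core;
        }

        /* If a core is over the overload threshold, steer work toward the least-loaded core. */
        static void rebalance(const struct core_stats *s, size_t ncores, double overload)
        {
            for (size_t c = 0; c < ncores; c++) {
                if (s[c].utilization <= overload)
                    continue;
                size_t best = c;
                for (size_t t = 0; t < ncores; t++)
                    if (s[t].utilization < s[best].utilization)
                        best = t;
                if (best != c)
                    send_reconfig_message(c, best);
            }
        }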
  • Publication number: 20190340123
    Abstract: Examples provide an application program interface or manner of negotiating the locking or pinning and unlocking or unpinning of a cache region by an application, software, or hardware. A cache region can be part of a level-1 cache, a level-2 cache, a lower or last level cache (LLC), or a translation lookaside buffer (TLB), and can be locked (e.g., pinned) or unlocked (e.g., unpinned). A cache lock controller can respond to a request to lock or unlock a region of cache or TLB by indicating that the request is successful or not successful. If a request is not successful, the controller can provide feedback indicating one or more aspects of the request that are not permitted. The application, software, or hardware can submit another request, a modified request, based on the feedback to attempt to lock a portion of the cache or TLB.
    Type: Application
    Filed: July 17, 2019
    Publication date: November 7, 2019
    Inventors: Andrew J. HERDRICH, Priya AUTEE, Abhishek KHADE, Patrick LU, Edwin VERPLANKE, Vivekananthan SANJEEPAN
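    The negotiate-with-feedback pattern described here can be sketched as a request/response pair: the controller either grants the pin or reports which aspect was not permitted so the caller can retry with a smaller request. All names, fields, and policy limits below are made up for illustration.
        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical request to pin (lock) part of a cache or TLB. */
        struct pin_request { uint8_t level; uint16_t ways; uint32_t bytes; };

        /* Hypothetical feedback telling the caller what is permitted. */
        struct pin_response {
            bool granted;
            uint16_t max_ways_allowed;   /* limit to retry within when ways was the problem  */
            uint32_t max_bytes_allowed;  /* limit to retry within when bytes was the problem */
        };

        /* Controller-side check with feedback, against invented policy limits. */
        static struct pin_response try_pin(const struct pin_request *req)
        {
            const uint16_t way_limit  = 4;
            const uint32_t byte_limit = 256 * 1024;
            struct pin_response r = { true, way_limit, byte_limit };

            if (req->ways > way_limit || req->bytes > byte_limit)
                r.granted = false;       /* caller can resubmit a modified request within the limits */
            return r;
        }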
  • Patent number: 10445271
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus include multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Yipeng Wang, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson