Patents by Inventor Hugh Wilkinson

Hugh Wilkinson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11849013
    Abstract: Techniques for embedding fabric addressing information within Ethernet media access control (MAC) addresses are disclosed herein and allow a multi-node fabric having potentially millions of nodes to feature Ethernet encapsulation without the necessity of a lookup or map to translate MAC addresses to fabric-routable local identifiers (LIDs). In particular, a locally-administered MAC address may be encoded with fabric addressing information including a LID. Thus, a node may exchange Ethernet packets using a multi-node fabric by encapsulating each Ethernet packet with a destination MAC address corresponding to an intended destination. Because the destination MAC address may implicitly map to a LID of the multi-node fabric, the node may use the LID value extracted from it to address a fabric-routable packet. To this end, a node may introduce a fabric-routable packet encapsulating an Ethernet packet onto a multi-node fabric without necessarily performing a lookup to map a MAC address to a corresponding LID.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: December 19, 2023
    Assignee: Intel Corporation
    Inventors: Hugh Wilkinson, James C. Wright
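
The abstract of patent 11849013 above turns on encoding a fabric LID directly into a locally-administered MAC address so that no MAC-to-LID lookup table is needed. The C fragment below is a minimal sketch of that idea only; the byte layout, the 24-bit LID width, and the function names are illustrative assumptions, not the encoding claimed by the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: set the locally-administered bit and place a
 * 24-bit fabric LID in the three low-order bytes of the MAC address.
 * The actual bit layout used by the patent is not reproduced here. */
static void lid_to_mac(uint32_t lid, uint8_t mac[6]) {
    mac[0] = 0x02;                 /* locally administered, unicast */
    mac[1] = 0x00;
    mac[2] = 0x00;
    mac[3] = (lid >> 16) & 0xFF;
    mac[4] = (lid >> 8) & 0xFF;
    mac[5] = lid & 0xFF;
}

static uint32_t mac_to_lid(const uint8_t mac[6]) {
    return ((uint32_t)mac[3] << 16) | ((uint32_t)mac[4] << 8) | mac[5];
}

int main(void) {
    uint8_t mac[6];
    lid_to_mac(0x0A1B2C, mac);
    printf("MAC %02x:%02x:%02x:%02x:%02x:%02x -> LID 0x%06x\n",
           mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], mac_to_lid(mac));
    return 0;
}
```
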
  • Publication number: 20230044342
    Abstract: Examples described herein relate to dynamically adjusting how hot pages in a remote memory pool are identified, based on adjustment of parameters of a data structure. In some examples, the parameters of the data structure include a range of access counts and a number of pages associated with the range.
    Type: Application
    Filed: September 30, 2022
    Publication date: February 9, 2023
    Inventor: Hugh Wilkinson
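
Publication 20230044342 above describes tuning the parameters of a data structure (a range of access counts and a number of pages per range) to identify hot pages. The sketch below models that as a simple access-count histogram with an adjustable bucket width; the structure and names are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define NBUCKETS 8

/* Hypothetical hot-page histogram: pages are binned by access count,
 * and the bucket width (range of counts per bucket) is adjustable. */
struct hotpage_hist {
    uint32_t bucket_width;          /* access counts covered per bucket */
    uint64_t pages_in_bucket[NBUCKETS];
};

static void record_page(struct hotpage_hist *h, uint32_t access_count) {
    uint32_t b = access_count / h->bucket_width;
    if (b >= NBUCKETS) b = NBUCKETS - 1;   /* clamp the hottest pages */
    h->pages_in_bucket[b]++;
}

int main(void) {
    struct hotpage_hist h = { .bucket_width = 16 };
    uint32_t samples[] = { 3, 17, 250, 40, 5, 1000 };
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++)
        record_page(&h, samples[i]);
    for (int b = 0; b < NBUCKETS; b++)
        printf("bucket %d (counts %u..%u): %llu pages\n", b,
               b * h.bucket_width, (b + 1) * h.bucket_width - 1,
               (unsigned long long)h.pages_in_bucket[b]);
    return 0;
}
```
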
  • Publication number: 20220334963
    Abstract: Examples described herein relate to circuitry that, when operational, is configured to store records of memory accesses to a memory device by at least one requester based on a configuration, wherein the configuration is to specify a duration of memory access capture. In some examples, the at least one requester comprises one or more workloads running on one or more processors. In some examples, the configuration is to specify collection of one or more of: physical address ranges or read or write access type.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 20, 2022
    Inventors: Ankit Patel, Lidia Warnes, Donald L. Faw, Bassam N. Coury, Douglas Carrigan, Hugh Wilkinson, Ananthan Ayyasamy, Michael F. Fallon
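
Publication 20220334963 above centers on a configuration that controls what memory accesses get recorded: a capture duration, optional physical address ranges, and read or write access types. The sketch below shows one plausible shape for such a configuration and its filtering check; all field names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical capture configuration: which physical address range to
 * watch, which access types to record, and for how long. */
struct capture_cfg {
    uint64_t phys_start, phys_end;  /* physical address range */
    bool     capture_reads;
    bool     capture_writes;
    uint64_t duration_us;           /* capture window */
};

struct access_record {
    uint64_t phys_addr;
    bool     is_write;
};

static bool should_record(const struct capture_cfg *c,
                          const struct access_record *r) {
    if (r->phys_addr < c->phys_start || r->phys_addr >= c->phys_end)
        return false;
    return r->is_write ? c->capture_writes : c->capture_reads;
}

int main(void) {
    struct capture_cfg cfg = { 0x100000, 0x200000, true, false, 500000 };
    struct access_record r = { 0x180000, false };
    printf("record? %s\n", should_record(&cfg, &r) ? "yes" : "no");
    return 0;
}
```
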
  • Patent number: 11134021
    Abstract: Techniques and apparatus for processor queue management are described. In one embodiment, for example, an apparatus to provide queue congestion management assistance may include at least one memory and logic for a queue manager, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to determine queue information for at least one queue element (QE) queue storing at least one QE, compare the queue information to at least one queue threshold value, and generate a queue notification responsive to the queue information being outside of the queue threshold value. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: September 28, 2021
    Assignee: Intel Corporation
    Inventors: Jonathan Kenny, Niall D. McDonnell, Andrew Cunningham, Debra Bernstein, William G. Burroughs, Hugh Wilkinson
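
Patent 11134021 above describes comparing queue information for a queue-element (QE) queue against a threshold and generating a notification when it falls outside that threshold. The fragment below is a bare-bones software model of that watermark check, with arbitrary depth and threshold values; it is not the claimed hardware logic.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical queue-depth watermark check: compare the current depth
 * of a queue-element (QE) queue against a threshold and emit a
 * congestion notification when it is exceeded. */
struct qe_queue {
    uint32_t depth;        /* number of QEs currently enqueued */
    uint32_t threshold;    /* congestion watermark */
};

static bool check_congestion(const struct qe_queue *q) {
    return q->depth > q->threshold;
}

int main(void) {
    struct qe_queue q = { .depth = 130, .threshold = 128 };
    if (check_congestion(&q))
        printf("queue notification: depth %u above threshold %u\n",
               q.depth, q.threshold);
    return 0;
}
```
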
  • Patent number: 11063884
    Abstract: This disclosure describes enhancements to Ethernet for use in higher-performance applications such as storage, HPC, and Ethernet-based fabric interconnects. It provides mechanisms for lossless fabric operation with error detection and retransmission to improve link reliability; frame pre-emption to allow higher-priority traffic to overtake lower-priority traffic; virtual channel support for deadlock avoidance by extending the class-of-service functionality defined in IEEE 802.1Q; a new header format for efficient forwarding and routing in the fabric interconnect; and a header CRC for reliable cut-through forwarding in the fabric interconnect. When added to standard and/or proprietary Ethernet protocols, these enhancements broaden the applicability of Ethernet to newer usage models and fabric interconnects currently served by alternative fabric technologies such as InfiniBand, Fibre Channel, and other proprietary technologies.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: July 13, 2021
    Assignee: Intel Corporation
    Inventors: Ilango Ganga, Alain Gravel, Thomas Lovett, Radia Perlman, Greg Regnier, Anil Vasudevan, Hugh Wilkinson
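
One of the enhancements listed in patent 11063884 above is a header CRC that lets a switch validate forwarding fields before cut-through forwarding the rest of the frame. The sketch below illustrates the general idea with CRC-16/CCITT over a placeholder 14-byte header; the actual polynomial, header format, and CRC coverage are not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a CRC computed over the header alone, so routing
 * fields can be validated before the remainder of the frame arrives. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void) {
    uint8_t header[14] = { 0x02, 0x00, 0x00, 0x0a, 0x1b, 0x2c }; /* rest zero */
    uint16_t hcrc = crc16_ccitt(header, sizeof header);
    printf("header CRC: 0x%04x; valid on receive: %s\n", hcrc,
           crc16_ccitt(header, sizeof header) == hcrc ? "yes" : "no");
    return 0;
}
```
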
  • Publication number: 20210209035
    Abstract: Examples described herein relate to an apparatus that includes at least two processing units and a memory hub coupled to the at least two processing units. In some examples, the memory hub includes a home agent. In some examples, the memory hub is to perform a memory access request involving a memory device, and a first processing unit among the at least two processing units is to send the memory access request to the memory hub. In some examples, the first processing unit is to offload at least some but not all home agent operations to the home agent of the memory hub. In some examples, the first processing unit comprises a second home agent, and the second home agent is to perform the at least some but not all home agent operations before the offload of those operations to the home agent of the memory hub.
    Type: Application
    Filed: March 25, 2021
    Publication date: July 8, 2021
    Inventors: Duane E. Galbi, Matthew J. Adiletta, Hugh Wilkinson, Patrick Connor
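
Publication 20210209035 above splits home agent operations between a processor-side agent and a home agent in a shared memory hub. The toy dispatcher below illustrates one way such a split could look; the operation names and the particular partitioning are assumptions, not the patent's.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical split of home-agent duties: the processor-side agent keeps
 * a fast local snoop-filter check, while directory lookups and memory
 * reads are offloaded to the home agent in the shared memory hub. */
enum ha_op { HA_SNOOP_FILTER_CHECK, HA_DIRECTORY_LOOKUP, HA_MEMORY_READ };

static bool offload_to_hub(enum ha_op op) {
    switch (op) {
    case HA_SNOOP_FILTER_CHECK: return false;  /* stays on the CPU side */
    case HA_DIRECTORY_LOOKUP:
    case HA_MEMORY_READ:        return true;   /* handled by the hub */
    }
    return false;
}

int main(void) {
    printf("directory lookup offloaded: %s\n",
           offload_to_hub(HA_DIRECTORY_LOOKUP) ? "yes" : "no");
    return 0;
}
```
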
  • Publication number: 20210200667
    Abstract: Examples described herein relate to memory thin provisioning in a memory pool of one or more dual in-line memory modules or memory devices. At any time, any central processing unit (CPU) can request and receive a full virtual allocation of memory in an amount that exceeds the physical memory attached to the CPU (near memory). A remote pool of additional memory can be dynamically utilized to fill the gap between allocated memory and near memory. This remote pool is shared between multiple CPUs, with dynamic assignment and address re-mapping provided for the remote pool. To improve performance, the near memory can be operated as a cache of the pool memory. Inclusive or exclusive content storage configurations can be applied: in an inclusive cache configuration, an entry in the near memory cache is also stored in the memory pool, whereas an exclusive cache configuration provides an entry in either the near memory cache or the memory pool, but not both.
    Type: Application
    Filed: December 26, 2019
    Publication date: July 1, 2021
    Inventors: Debra Bernstein, Hugh Wilkinson, Douglas Carrigan, Bassam N. Coury, Matthew J. Adiletta, Durgesh Srivastava, Lidia Warnes, William Wheeler, Michael F. Fallon
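
Publication 20210200667 above distinguishes inclusive and exclusive configurations for using near memory as a cache of a shared memory pool. The toy model below just tracks where a line resides under each policy after a fill from the pool; the types and policy details are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the inclusive vs. exclusive choice: after a line is fetched
 * from the remote memory pool into the near-memory cache, an inclusive
 * configuration keeps the pool copy, while an exclusive configuration
 * keeps only one copy of the line. */
struct line_state {
    bool in_near_cache;
    bool in_pool;
};

static void fill_from_pool(struct line_state *l, bool inclusive) {
    l->in_near_cache = true;
    l->in_pool = inclusive;        /* exclusive: the pool copy is given up */
}

int main(void) {
    struct line_state a = { false, true }, b = { false, true };
    fill_from_pool(&a, true);
    fill_from_pool(&b, false);
    printf("inclusive: near=%d pool=%d\n", a.in_near_cache, a.in_pool);
    printf("exclusive: near=%d pool=%d\n", b.in_near_cache, b.in_pool);
    return 0;
}
```
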
  • Patent number: 10929323
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, reducing core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
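
Patent 10929323 above includes a resource management system that controls the rate at which cores may submit requests to the hardware queue management device. The sketch below models that as a simple per-core credit budget; the credit scheme shown is a generic one chosen for illustration, not the patented mechanism.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy credit scheme: each core holds a credit budget, spends one credit
 * per request submitted to the queue management device, and gets credits
 * back as requests complete, throttling cores before internal queues
 * overflow. */
struct core_credits {
    uint32_t credits;
};

static bool submit_request(struct core_credits *c) {
    if (c->credits == 0)
        return false;              /* core must back off, not drop */
    c->credits--;
    return true;
}

static void request_completed(struct core_credits *c) {
    c->credits++;                  /* device returns the credit */
}

int main(void) {
    struct core_credits core0 = { .credits = 2 };
    int a = submit_request(&core0);
    int b = submit_request(&core0);
    int c = submit_request(&core0);            /* refused: no credit left */
    printf("submits: %d %d %d\n", a, b, c);
    request_completed(&core0);
    printf("after completion: %d\n", submit_request(&core0));
    return 0;
}
```
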
  • Patent number: 10761979
    Abstract: A processor of an aspect includes a register to store a condition code bit, and a decode unit to decode a bit check instruction. The bit check instruction is to indicate a first source operand that is to include a first bit, and is to indicate a check bit value for the first bit. The processor also includes an execution unit coupled with the decode unit. The execution unit, in response to the bit check instruction, is to compare the first bit with the check bit value, and update a condition code bit to indicate whether the first bit equals or does not equal the check bit value. Other processors, methods, systems, and instructions are disclosed.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Hugh Wilkinson, William R. Wheeler, Debra Bernstein
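
Patent 10761979 above defines a bit check instruction that compares one bit of a source operand against a check bit value and updates a condition code bit accordingly. The C function below models those semantics at a high level; the operand encoding and the choice of flag are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* High-level model of the described semantics: test one bit of a source
 * operand against an expected check value and record the result in a
 * condition flag. */
static bool bit_check(uint64_t src, unsigned bit_index, unsigned check_bit) {
    unsigned actual = (src >> bit_index) & 1u;
    return actual == (check_bit & 1u);   /* condition code: equal or not */
}

int main(void) {
    uint64_t value = 0xA5;               /* 1010 0101 */
    printf("bit 2 == 1 ? %s\n", bit_check(value, 2, 1) ? "set" : "clear");
    printf("bit 3 == 1 ? %s\n", bit_check(value, 3, 1) ? "set" : "clear");
    return 0;
}
```
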
  • Publication number: 20200145519
    Abstract: Techniques for embedding fabric addressing information within Ethernet media access control (MAC) addresses are disclosed herein and allow a multi-node fabric having potentially millions of nodes to feature Ethernet encapsulation without the necessity of a lookup or map to translate MAC addresses to fabric-routable local identifiers (LIDs). In particular, a locally-administered MAC address may be encoded with fabric addressing information including a LID. Thus, a node may exchange Ethernet packets using a multi-node fabric by encapsulating each Ethernet packet with a destination MAC address corresponding to an intended destination. Because the destination MAC address may implicitly map to a LID of the multi-node fabric, the node may use the LID value extracted from it to address a fabric-routable packet. To this end, a node may introduce a fabric-routable packet encapsulating an Ethernet packet onto a multi-node fabric without necessarily performing a lookup to map a MAC address to a corresponding LID.
    Type: Application
    Filed: December 26, 2019
    Publication date: May 7, 2020
    Applicant: Intel Corporation
    Inventors: Hugh Wilkinson, James C. Wright
  • Publication number: 20200125495
    Abstract: An apparatus is described. The apparatus includes a semiconductor chip package containing an SOC with a memory controller, an interface to an external memory, and a memory side cache. The memory side cache is composed of eDRAM and is coupled between the memory controller and the interface to the external memory; the eDRAM is to cache more frequently used items of the external memory. The semiconductor chip package has an out-of-order interface between the memory controller and the memory side cache.
    Type: Application
    Filed: December 19, 2019
    Publication date: April 23, 2020
    Inventors: Duane E. Galbi, Bradley A. Burres, Matthew J. Adiletta, Hugh Wilkinson, Aaron Gorius
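
Publication 20200125495 above places an eDRAM memory side cache between the memory controller and the external-memory interface so frequently used items are served from within the package. The fragment below is a toy direct-mapped model of such a lookup; the geometry and fill policy are arbitrary illustrative choices.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SETS 1024
#define LINE 64

/* Minimal direct-mapped model of a memory-side cache sitting between the
 * memory controller and external memory: hits are served from the cache
 * (e.g. eDRAM), misses are forwarded to the external memory. */
struct ms_cache {
    uint64_t tag[SETS];
    bool     valid[SETS];
};

static bool lookup(struct ms_cache *c, uint64_t paddr, bool fill_on_miss) {
    uint64_t set = (paddr / LINE) % SETS;
    uint64_t tag = paddr / LINE / SETS;
    if (c->valid[set] && c->tag[set] == tag)
        return true;                       /* served from the cache */
    if (fill_on_miss) { c->valid[set] = true; c->tag[set] = tag; }
    return false;                          /* forwarded to external memory */
}

int main(void) {
    static struct ms_cache c;
    printf("first access: %s\n", lookup(&c, 0x12345678, true) ? "hit" : "miss");
    printf("second access: %s\n", lookup(&c, 0x12345678, true) ? "hit" : "miss");
    return 0;
}
```
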
  • Publication number: 20200042479
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, reducing core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Applicant: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Patent number: 10541941
    Abstract: Apparatus and methods for cableless connection of components within chassis and between separate chassis. Pairs of Extremely High Frequency (EHF) transceiver chips supporting very short length millimeter-wave wireless communication links are configured to pass radio frequency signals through holes in one or more metal layers in separate chassis and/or frames, enabling components in the separate chassis to communicate without requiring cables between the chassis. Various configurations are disclosed, including multiple configurations for server chassis, storage chassis and arrays, and network/switch chassis. The EHF-based wireless links support link bandwidths of up to 6 gigabits per second, and may be aggregated to facilitate multi-lane links.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventors: Matthew J. Adiletta, Aaron Gorius, Myles Wilde, Hugh Wilkinson, Amit Y. Kumar
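
Patent 10541941 above quotes EHF link bandwidths of up to 6 gigabits per second and notes that links may be aggregated into multi-lane links. The snippet below only works out the aggregate raw bandwidth for a hypothetical four-lane configuration; the lane count is an assumption.

```c
#include <stdio.h>

/* Aggregate raw bandwidth for an assumed multi-lane EHF configuration. */
int main(void) {
    const double per_lane_gbps = 6.0;   /* upper bound quoted in the abstract */
    const int lanes = 4;                /* lane count chosen for illustration */
    printf("aggregate: %.1f Gb/s\n", per_lane_gbps * lanes);
    return 0;
}
```
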
  • Patent number: 10523796
    Abstract: Techniques for embedding fabric addressing information within Ethernet media access control (MAC) addresses are disclosed herein and allow a multi-node fabric having potentially millions of nodes to feature Ethernet encapsulation without the necessity of a lookup or map to translate MAC addresses to fabric-routable local identifiers (LIDs). In particular, a locally-administered MAC address may be encoded with fabric addressing information including a LID. Thus, a node may exchange Ethernet packets using a multi-node fabric by encapsulating each Ethernet packet with a destination MAC address corresponding to an intended destination. Because the destination MAC address may implicitly map to a LID of the multi-node fabric, the node may use the LID value extracted from it to address a fabric-routable packet. To this end, a node may introduce a fabric-routable packet encapsulating an Ethernet packet onto a multi-node fabric without necessarily performing a lookup to map a MAC address to a corresponding LID.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: December 31, 2019
    Assignee: Intel Corporation
    Inventors: Hugh Wilkinson, James C. Wright
  • Publication number: 20190386934
    Abstract: This disclosure describes enhancements to Ethernet for use in higher-performance applications such as storage, HPC, and Ethernet-based fabric interconnects. It provides mechanisms for lossless fabric operation with error detection and retransmission to improve link reliability; frame pre-emption to allow higher-priority traffic to overtake lower-priority traffic; virtual channel support for deadlock avoidance by extending the class-of-service functionality defined in IEEE 802.1Q; a new header format for efficient forwarding and routing in the fabric interconnect; and a header CRC for reliable cut-through forwarding in the fabric interconnect. When added to standard and/or proprietary Ethernet protocols, these enhancements broaden the applicability of Ethernet to newer usage models and fabric interconnects currently served by alternative fabric technologies such as InfiniBand, Fibre Channel, and other proprietary technologies.
    Type: Application
    Filed: August 28, 2019
    Publication date: December 19, 2019
    Applicant: Intel Corporation
    Inventors: Ilango Ganga, Alain Gravel, Thomas Lovett, Radia Perlman, Greg Regnier, Anil Vasudevan, Hugh Wilkinson
  • Patent number: 10445271
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, reducing core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Yipeng Wang, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson
  • Patent number: 10404625
    Abstract: This disclosure describes enhancements to Ethernet for use in higher-performance applications such as storage, HPC, and Ethernet-based fabric interconnects. It provides mechanisms for lossless fabric operation with error detection and retransmission to improve link reliability; frame pre-emption to allow higher-priority traffic to overtake lower-priority traffic; virtual channel support for deadlock avoidance by extending the class-of-service functionality defined in IEEE 802.1Q; a new header format for efficient forwarding and routing in the fabric interconnect; and a header CRC for reliable cut-through forwarding in the fabric interconnect. When added to standard and/or proprietary Ethernet protocols, these enhancements broaden the applicability of Ethernet to newer usage models and fabric interconnects currently served by alternative fabric technologies such as InfiniBand, Fibre Channel, and other proprietary technologies.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: September 3, 2019
    Assignee: Intel Corporation
    Inventors: Ilango Ganga, Alain Gravel, Thomas Lovett, Radia Perlman, Greg Regnier, Anil Vasudevan, Hugh Wilkinson
  • Patent number: 10374726
    Abstract: Apparatus and methods for rack-level pre-installed interconnect for enabling cableless server, storage, and networking deployment. Plastic cable waveguides are configured to couple millimeter-wave radio frequency (RF) signals between two or more Extremely High Frequency (EHF) transceiver chips, thus supporting millimeter-wave wireless communication links that enable components in separate chassis to communicate without requiring wire or optical cables between the chassis. Various configurations are disclosed, including multiple configurations for server chassis, storage chassis and arrays, and network/switch chassis. A plurality of plastic cable waveguides may be coupled to applicable support/mounting members, which in turn are mounted to a rack and/or top-of-rack switches. This enables the plastic cable waveguides to be pre-installed at the rack level, and further enables racks to be installed and replaced without requiring further cabling for the supported communication links.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Matthew J. Adiletta, Aaron Gorius, Myles Wilde, Hugh Wilkinson, Amit Y. Kumar
  • Patent number: 10216668
    Abstract: Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue data to or dequeue data from a hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different hardware queue manager without changing the virtual address of the virtual queue.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: February 26, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Jr-Shian Tsai, Andrew Herdrich, Tsung-Yuan Tai, Niall McDonnell, Stephen Van Doren, David Sonnier, Debra Bernstein, Hugh Wilkinson, Narender Vangati, Stephen Miller, Gage Eads, Andrew Cunningham, Jonathan Kenny, Bruce Richardson, William Burroughs, Joseph Hasting, An Yan, James Clee, Te Ma, Jerry Pirog, Jamison Whitesell
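
Patent 10216668 above has processor cores address queues by virtual queue addresses that are translated into physical queues within a particular hardware queue manager, so a virtual queue can be migrated without changing its address. The sketch below is a toy translation table illustrating that indirection; the table layout and identifiers are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NVQ 16

/* Toy translation table: a virtual queue id used by software maps to a
 * (queue manager, physical queue) pair, and the mapping can be changed
 * without touching the virtual id the cores use. */
struct vq_map {
    uint8_t qm_id[NVQ];      /* which hardware queue manager */
    uint8_t pq_id[NVQ];      /* physical queue within that manager */
};

static void remap(struct vq_map *m, unsigned vq, uint8_t qm, uint8_t pq) {
    m->qm_id[vq] = qm;
    m->pq_id[vq] = pq;
}

int main(void) {
    struct vq_map m = { {0}, {0} };
    remap(&m, 5, 0, 3);                     /* vq 5 -> QM0, queue 3     */
    remap(&m, 5, 1, 7);                     /* migrated to QM1, queue 7 */
    printf("vq 5 -> QM%u, PQ%u\n", m.qm_id[5], m.pq_id[5]);
    return 0;
}
```
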
  • Patent number: 10205667
    Abstract: One embodiment provides a method for enabling class-based credit flow control for a network node in communication with a link partner using an Ethernet communications protocol. The method includes receiving a control frame from the link partner. The control frame includes at least one field for specifying credit for at least one traffic class and the credit is based on available space in a receive buffer associated with the at least one traffic class. The method further includes sending data packets to the link partner based on the credit, the data packets associated with the at least one traffic class.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: February 12, 2019
    Assignee: Intel Corporation
    Inventors: Ilango Ganga, Alain Gravel, Thomas D. Lovett, Radia Perlman, Greg Regnier, Anil Vasudevan, Hugh Wilkinson
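
Patent 10205667 above describes a control frame that advertises per-traffic-class credits derived from receive-buffer space, with the sender transmitting data packets for a class only while it holds credit. The sketch below is a minimal software model of that accounting; the frame format and credit units are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CLASSES 8

/* Toy model of class-based credit flow control: the receiver advertises
 * per-traffic-class credits in a control frame, and the sender spends
 * one credit per packet of that class. */
struct credit_state {
    uint32_t credits[NUM_CLASSES];
};

static void apply_control_frame(struct credit_state *s,
                                const uint32_t advertised[NUM_CLASSES]) {
    for (int tc = 0; tc < NUM_CLASSES; tc++)
        s->credits[tc] = advertised[tc];
}

static bool try_send(struct credit_state *s, int tc) {
    if (s->credits[tc] == 0)
        return false;                   /* hold the packet, no drop */
    s->credits[tc]--;
    return true;
}

int main(void) {
    struct credit_state s = { {0} };
    uint32_t adv[NUM_CLASSES] = { [3] = 1 };   /* receiver grants class 3 */
    apply_control_frame(&s, adv);
    int first = try_send(&s, 3);
    int second = try_send(&s, 3);              /* credit exhausted */
    printf("class 3: %d then %d; class 0 (no credit): %d\n",
           first, second, try_send(&s, 0));
    return 0;
}
```
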