Patents by Inventor Seth Pugsley

Seth Pugsley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240111679
    Abstract: Techniques for prefetching by a hardware processor are described. In certain examples, a hardware processor includes execution circuitry, cache memories, and prefetcher circuitry. The execution circuitry is to execute instructions to access data at a memory address. The cache memories include a first cache memory at a first cache level and a second cache memory at a second cache level. The prefetcher circuitry is to prefetch the data from a system memory to at least one of the cache memories, and it includes a first-level prefetcher to prefetch the data to the first cache memory, a second-level prefetcher to prefetch the data to the second cache memory, and a plurality of prefetch filters. One of the prefetch filters is to filter exclusively for the first-level prefetcher. Another of the prefetch filters is to maintain a history of demand and prefetch accesses to pages in the system memory and to use the history to provide training information to the second-level prefetcher.
    Type: Application
    Filed: October 1, 2022
    Publication date: April 4, 2024
    Applicant: Intel Corporation
    Inventors: Seth Pugsley, Mark Dechene, Ryan Carlson, Manjunath Shevgoor
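    The abstract above describes a two-level prefetcher in which one filter serves only the first-level prefetcher while another keeps a per-page history of demand and prefetch accesses that trains the second-level prefetcher. The Python sketch below is only a hedged illustration of that idea; the class names, filter sizes, and the next-line and stride heuristics are assumptions, not details taken from the patent application.

        from collections import defaultdict

        LINES_PER_PAGE = 64  # assumption: 64 B cache lines, 4 KiB pages

        class L1PrefetchFilter:
            """Drops first-level prefetch requests for recently requested lines."""
            def __init__(self, capacity=64):
                self.recent, self.capacity = [], capacity

            def allow(self, line_addr):
                if line_addr in self.recent:
                    return False
                self.recent.append(line_addr)
                if len(self.recent) > self.capacity:
                    self.recent.pop(0)
                return True

        class PageHistoryFilter:
            """Records demand/prefetch accesses per page; the history trains the L2 prefetcher."""
            def __init__(self):
                self.history = defaultdict(list)  # page -> list of (offset, kind)

            def record(self, line_addr, kind):
                page, offset = divmod(line_addr, LINES_PER_PAGE)
                self.history[page].append((offset, kind))
                return self.history[page]

        class TwoLevelPrefetcher:
            def __init__(self):
                self.l1_filter = L1PrefetchFilter()
                self.page_filter = PageHistoryFilter()

            def on_demand_access(self, line_addr):
                issued = []
                # First-level prefetcher: next-line, gated by its dedicated filter.
                if self.l1_filter.allow(line_addr + 1):
                    issued.append(("L1", line_addr + 1))
                # Second-level prefetcher: trained by the per-page access history.
                history = self.page_filter.record(line_addr, "demand")
                if len(history) >= 2:
                    stride = history[-1][0] - history[-2][0]
                    if stride:
                        issued.append(("L2", line_addr + stride))
                return issued

        prefetcher = TwoLevelPrefetcher()
        for line in (0x1000, 0x1002, 0x1004):
            print(prefetcher.on_demand_access(line))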
  • Patent number: 11593623
    Abstract: System configurations and techniques for implementation of a neural network in neuromorphic hardware with use of external memory resources are described herein. In an example, a system for processing spiking neural network operations includes: a plurality of neural processor clusters to maintain neurons of the neural network, with the clusters including circuitry to determine respective states of the neurons and internal memory to store the respective states of the neurons; and a plurality of axon processors to process synapse data of synapses in the neural network, with the processors including circuitry to retrieve synapse data of respective synapses from external memory, evaluate the synapse data based on a received spike message, and propagate another spike message to another neuron based on the synapse data. Further details for use and access of the external memory and processing configurations for such neural network operations are also disclosed.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: February 28, 2023
    Assignee: Intel Corporation
    Inventors: Berkin Akin, Seth Pugsley
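    As a rough sketch of the dataflow in this abstract, the Python below models neuron state held in on-chip memory inside neural processor clusters, while synapse lists are fetched on demand from an external memory by an axon processor. The dictionary layout, threshold, and weights are illustrative assumptions only, not the patented design.

        # External (off-chip) memory: source neuron -> list of (target neuron, weight).
        # The layout is assumed for illustration, not taken from the patent.
        EXTERNAL_SYNAPSE_MEMORY = {
            0: [(1, 0.6), (2, 1.2)],
            1: [(2, 0.5)],
        }

        class NeuralProcessorCluster:
            """Keeps neuron membrane potentials in internal (on-chip) memory."""
            def __init__(self, neuron_ids, threshold=1.0):
                self.state = {n: 0.0 for n in neuron_ids}
                self.threshold = threshold

            def integrate(self, neuron, weight):
                self.state[neuron] += weight
                if self.state[neuron] >= self.threshold:
                    self.state[neuron] = 0.0   # reset after firing
                    return neuron              # this neuron spikes
                return None

        class AxonProcessor:
            """Fetches synapse data from external memory and fans out spike messages."""
            def __init__(self, cluster):
                self.cluster = cluster

            def process_spike(self, source_neuron):
                fired = []
                for target, weight in EXTERNAL_SYNAPSE_MEMORY.get(source_neuron, []):
                    spiked = self.cluster.integrate(target, weight)
                    if spiked is not None:
                        fired.append(spiked)
                return fired

        cluster = NeuralProcessorCluster(neuron_ids=[0, 1, 2])
        axon = AxonProcessor(cluster)
        print(axon.process_spike(0))   # neurons that fired as a result of the spike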
  • Patent number: 11366998
    Abstract: Systems and techniques for neuromorphic accelerator multitasking are described herein. A neuron address translation unit (NATU) may receive a spike message. Here, the spike message includes a physical neuron identifier (PNID) of a neuron causing the spike. The NATU may then translate the PNID into a network identifier (NID) and a local neuron identifier (LNID). The NATU locates synapse data based on the NID and communicates the synapse data and the LNID to an axon processor.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 21, 2022
    Assignee: Intel Corporation
    Inventors: Seth Pugsley, Berkin Akin
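    The translation step in this abstract can be sketched briefly, under assumed bit widths: the PNID is split into a network ID (NID) and a local neuron ID (LNID), the NID selects that network's synapse table, and the located synapse data plus the LNID are handed to an axon processor. The 16-bit split and the table layout below are illustrative, not taken from the patent.

        LNID_BITS = 16   # assumed PNID split: upper bits = NID, lower bits = LNID

        SYNAPSE_TABLES = {
            # NID -> that network's synapse data, indexed by LNID
            0: {5: [("net0-target7", 0.4)]},
            1: {5: [("net1-target9", 0.8)]},
        }

        def natu_translate(pnid):
            """Split a physical neuron ID into (network ID, local neuron ID)."""
            return pnid >> LNID_BITS, pnid & ((1 << LNID_BITS) - 1)

        def handle_spike(pnid, axon_processor):
            nid, lnid = natu_translate(pnid)
            synapse_data = SYNAPSE_TABLES[nid].get(lnid, [])
            # The NATU passes the located synapse data and the LNID onward.
            return axon_processor(synapse_data, lnid)

        def axon_processor(synapse_data, lnid):
            # Stand-in for the real axon processor: report what it received.
            return f"LNID {lnid}: {len(synapse_data)} synapse entries"

        # Two networks can reuse the same LNID without colliding, which is the
        # multitasking property the abstract describes.
        print(handle_spike((0 << LNID_BITS) | 5, axon_processor))
        print(handle_spike((1 << LNID_BITS) | 5, axon_processor))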
  • Publication number: 20190042920
    Abstract: System configurations and techniques for implementation of a neural network in neuromorphic hardware with use of external memory resources are described herein. In an example, a system for processing spiking neural network operations includes: a plurality of neural processor clusters to maintain neurons of the neural network, with the clusters including circuitry to determine respective states of the neurons and internal memory to store the respective states of the neurons; and a plurality of axon processors to process synapse data of synapses in the neural network, with the processors including circuitry to retrieve synapse data of respective synapses from external memory, evaluate the synapse data based on a received spike message, and propagate another spike message to another neuron based on the synapse data. Further details for use and access of the external memory and processing configurations for such neural network operations are also disclosed.
    Type: Application
    Filed: December 22, 2017
    Publication date: February 7, 2019
    Inventors: Berkin Akin, Seth Pugsley
  • Publication number: 20190042915
    Abstract: Systems and techniques for procedural neural network synaptic connection modes are described herein. A synapse list header may be loaded based on a received spike indication. A spike target generator may then execute a generator function identified in the synapse list header to produce a spike message. Here, the generator function accepts a current synapse value as input to produce the spike message. The spike message may then be communicated to a neuron.
    Type: Application
    Filed: March 30, 2018
    Publication date: February 7, 2019
    Inventors: Berkin Akin, Seth Pugsley
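    A hedged sketch of the procedural-connectivity idea in this abstract follows; the generator names, header layout, and weights are assumptions. Instead of storing every synapse explicitly, the synapse list header names a generator function, and the spike target generator runs it with the current synapse value to produce spike messages for target neurons.

        def dense_block(synapse_value):
            """Illustrative generator: fan out to a contiguous block of eight neurons."""
            base = synapse_value * 8
            return [(base + i, 0.25) for i in range(8)]

        def stride_connect(synapse_value):
            """Illustrative generator: connect to every fourth neuron after the source."""
            return [(synapse_value + 4 * i, 0.5) for i in range(1, 4)]

        GENERATORS = {"dense_block": dense_block, "stride": stride_connect}

        def spike_target_generator(synapse_list_header, synapse_value):
            """Run the generator named in the header to produce spike messages."""
            generator = GENERATORS[synapse_list_header["generator"]]
            return [{"target": t, "weight": w} for t, w in generator(synapse_value)]

        header = {"generator": "stride"}          # loaded on a spike indication
        print(spike_target_generator(header, synapse_value=10))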
  • Publication number: 20190042930
    Abstract: Systems and techniques for neuromorphic accelerator multitasking are described herein. A neuron address translation unit (NATU) may receive a spike message. Here, the spike message includes a physical neuron identifier (PNID) of a neuron causing the spike. The NATU may then translate the PNID into a network identifier (NID) and a local neuron identifier (LNID). The NATU locates synapse data based on the NID and communicates the synapse data and the LNID to an axon processor.
    Type: Application
    Filed: March 27, 2018
    Publication date: February 7, 2019
    Inventors: Seth Pugsley, Berkin Akin
  • Patent number: 10102134
    Abstract: A processor includes a cache, a prefetcher module to select information according to a prefetcher algorithm, and a prefetcher algorithm selection module. The prefetcher algorithm selection module includes logic to select a candidate prefetcher algorithm, determine and store memory addresses of predicted memory accesses of the candidate prefetcher algorithm when performed by the prefetcher module, determine cache lines accessed during memory operations, and evaluate whether the determined cache lines match the stored memory addresses. The prefetcher algorithm selection module further includes logic to adjust an accuracy ratio of the candidate prefetcher algorithm, compare the accuracy ratio with a threshold accuracy ratio, and determine whether to apply the candidate prefetcher algorithm to the prefetcher module.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: October 16, 2018
    Assignee: Intel Corporation
    Inventors: Zeshan A. Chishti, Christopher B. Wilkerson, Seth Pugsley, Peng-Fei Chuang, Robert L. Scott, Aamer Jaleel, Shih-Lien L. Lu, Kingsum Chow
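    The selection loop in this abstract can be sketched in a few lines; the next-line trial algorithm, the demand stream, and the 0.5 threshold below are assumptions for illustration only. A candidate prefetcher algorithm is evaluated in shadow mode: its predicted addresses are stored, compared against the cache lines actually accessed, and the resulting accuracy ratio is checked against a threshold before the candidate is applied to the prefetcher module.

        def next_line(addr):
            """Candidate prefetcher algorithm: predict the next cache line."""
            return addr + 1

        def evaluate_candidate(candidate, demand_stream, threshold=0.5):
            predicted, hits = set(), 0
            for addr in demand_stream:
                if addr in predicted:           # a stored prediction matched a real access
                    hits += 1
                predicted.add(candidate(addr))  # store the candidate's predicted address
            accuracy = hits / max(len(demand_stream), 1)
            return accuracy, accuracy >= threshold

        stream = [0x100, 0x101, 0x102, 0x200, 0x201]
        accuracy, apply_candidate = evaluate_candidate(next_line, stream)
        print(f"accuracy ratio = {accuracy:.2f}, apply candidate: {apply_candidate}")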
  • Publication number: 20160299847
    Abstract: A processor includes a cache, a prefetcher module to select information according to a prefetcher algorithm, and a prefetcher algorithm selection module. The prefetcher algorithm selection module includes logic to select a candidate prefetcher algorithm, determine and store memory addresses of predicted memory accesses of the candidate prefetcher algorithm when performed by the prefetcher module, determine cache lines accessed during memory operations, and evaluate whether the determined cache lines match the stored memory addresses. The prefetcher algorithm selection module further includes logic to adjust an accuracy ratio of the candidate prefetcher algorithm, compare the accuracy ratio with a threshold accuracy ratio, and determine whether to apply the candidate prefetcher algorithm to the prefetcher module.
    Type: Application
    Filed: June 23, 2016
    Publication date: October 13, 2016
    Inventors: Zeshan A. Chishti, Christopher B. Wilkerson, Seth Pugsley, Peng-Fei Chuang, Robert L. Scott, Aamer Jaleel, Shih-Lien L. Lu, Kingsum Chow
  • Patent number: 9378021
    Abstract: A processor includes a cache, a prefetcher module to select information according to a prefetcher algorithm, and a prefetcher algorithm selection module. The prefetcher algorithm selection module includes logic to select a candidate prefetcher algorithm, determine and store memory addresses of predicted memory accesses of the candidate prefetcher algorithm when performed by the prefetcher module, determine cache lines accessed during memory operations, and evaluate whether the determined cache lines match the stored memory addresses. The prefetcher algorithm selection module further includes logic to adjust an accuracy ratio of the candidate prefetcher algorithm, compare the accuracy ratio with a threshold accuracy ratio, and determine whether to apply the candidate prefetcher algorithm to the prefetcher module.
    Type: Grant
    Filed: February 14, 2014
    Date of Patent: June 28, 2016
    Assignee: Intel Corporation
    Inventors: Zeshan A. Chishti, Christopher B. Wilkerson, Seth Pugsley, Peng-Fei Chuang, Robert L. Scott, Aamer Jaleel, Shih-Lien L. Lu, Kingsum Chow
  • Publication number: 20150234663
    Abstract: A processor includes a cache, a prefetcher module to select information according to a prefetcher algorithm, and a prefetcher algorithm selection module. The prefetcher algorithm selection module includes logic to select a candidate prefetcher algorithm, determine and store memory addresses of predicted memory accesses of the candidate prefetcher algorithm when performed by the prefetcher module, determine cache lines accessed during memory operations, and evaluate whether the determined cache lines match the stored memory addresses. The prefetcher algorithm selection module further includes logic to adjust an accuracy ratio of the candidate prefetcher algorithm, compare the accuracy ratio with a threshold accuracy ratio, and determine whether to apply the candidate prefetcher algorithm to the prefetcher module.
    Type: Application
    Filed: February 14, 2014
    Publication date: August 20, 2015
    Inventors: Zeshan A. Chishti, Christopher B. Wilkerson, Seth Pugsley, Peng-Fei Chuang, Robert L. Scott, Aamer Jaleel, Shih-Lien L. Lu, Kingsum Chow
  • Publication number: 20140173170
    Abstract: A multiple subarray-access memory system is disclosed. The system includes a plurality of memory chips, each including a plurality of subarrays, and a memory controller in communication with the memory chips, the memory controller to receive a memory fetch width ("MFW") instruction during operating system start-up and, responsive to the MFW instruction, to fix a quantity of the subarrays that will be activated in response to memory access requests.
    Type: Application
    Filed: December 14, 2012
    Publication date: June 19, 2014
    Applicant: Hewlett-Packard Development Company, L.P.
    Inventors: Naveen Muralimanohar, Norman P. Jouppi, Rajeev Balasubramonian, Seth Pugsley, Niladrish Chatterjee, Alan Lynn Davis
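    The boot-time configuration described in this abstract is sketched below under assumed sizes (eight subarrays per chip, 64-byte interleaving); none of these values come from the application. The memory controller latches a memory fetch width (MFW) at operating system start-up, and that width fixes how many subarrays are activated for every later memory access request.

        SUBARRAYS_PER_CHIP = 8          # assumed chip organization

        class MemoryController:
            def __init__(self):
                self.mfw = SUBARRAYS_PER_CHIP    # default: activate every subarray

            def set_mfw(self, width):
                """Handle the MFW instruction issued during operating system start-up."""
                if not 1 <= width <= SUBARRAYS_PER_CHIP:
                    raise ValueError("MFW out of range")
                self.mfw = width

            def access(self, address):
                """Return the subarrays activated for one memory access request."""
                first = (address // 64) % SUBARRAYS_PER_CHIP
                return [(first + i) % SUBARRAYS_PER_CHIP for i in range(self.mfw)]

        controller = MemoryController()
        controller.set_mfw(2)            # MFW instruction at boot fixes the fetch width
        print(controller.access(0x1F40)) # only two subarrays are activated per request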