Patents by Inventor William Burroughs

William Burroughs has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230231809
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
    Type: Application
    Filed: January 13, 2023
    Publication date: July 20, 2023
    Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
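
The claimed circuitry is hardware, but the allocation rule in the abstract can be sketched in software: grow the set of dequeuing "second" cores whenever the first core's throughput fails to satisfy a threshold. A minimal C illustration, with all names, units, and threshold values assumed rather than taken from the filing:

```c
#include <stdio.h>

#define MAX_WORKERS 4

/* Allocate one more dequeuing "second" core whenever the first core's
 * measured throughput fails to satisfy the threshold. */
static int allocate_workers(double throughput_mpps, double threshold_mpps,
                            int workers)
{
    if (throughput_mpps < threshold_mpps && workers < MAX_WORKERS)
        return workers + 1;
    return workers;
}

int main(void)
{
    const double threshold = 10.0;                     /* Mpps, assumed */
    const double samples[] = { 12.0, 9.5, 7.0, 11.0 }; /* measured rates */
    int workers = 0;

    for (int i = 0; i < 4; i++) {
        workers = allocate_workers(samples[i], threshold, workers);
        printf("throughput=%.1f Mpps -> worker cores=%d\n",
               samples[i], workers);
    }
    return 0;
}
```
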
  • Patent number: 11575607
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: February 7, 2023
    Assignee: Intel Corporation
    Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
  • Publication number: 20220286399
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for hardware queue scheduling for multi-core computing environments. An example apparatus includes a first core and a second core of a processor, and circuitry in a die of the processor, at least one of the first core or the second core included in the die, the at least one of the first core or the second core separate from the circuitry, the circuitry to enqueue an identifier to a queue implemented with the circuitry, the identifier associated with a data packet, assign the identifier in the queue to a first core of the processor, and in response to an execution of an operation on the data packet with the first core, provide the identifier to the second core to cause the second core to distribute the data packet.
    Type: Application
    Filed: September 11, 2020
    Publication date: September 8, 2022
    Inventors: Niall McDonnell, Gage Eads, Mrittika Ganguli, Chetan Hiremath, John Mangan, Stephen Palermo, Bruce Richardson, Edwin Verplanke, Praveen Mosur, Bradley Chaddick, Abhishek Khade, Abhirupa Layek, Sarita Maini, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
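
The staged flow in this abstract amounts to a small pipeline: enqueue an identifier per packet, let one core execute an operation on the packet, then hand the identifier to a second core for distribution. A software model of that handoff, with the FIFO type and all names invented for illustration:

```c
#include <stdio.h>

#define QDEPTH 8

typedef struct { int ids[QDEPTH]; int head, tail; } fifo_t;

static void push(fifo_t *q, int id) { q->ids[q->tail++ % QDEPTH] = id; }
static int  pop(fifo_t *q)          { return q->ids[q->head++ % QDEPTH]; }
static int  empty(const fifo_t *q)  { return q->head == q->tail; }

int main(void)
{
    fifo_t to_worker = {0}, to_distributor = {0};

    /* Enqueue stage: identifiers for three received packets. */
    for (int id = 100; id < 103; id++)
        push(&to_worker, id);

    /* First core: execute an operation, then hand off the identifier. */
    while (!empty(&to_worker)) {
        int id = pop(&to_worker);
        printf("worker core processed packet id=%d\n", id);
        push(&to_distributor, id);
    }

    /* Second core: distribute the processed packets. */
    while (!empty(&to_distributor))
        printf("distributor core sent packet id=%d\n", pop(&to_distributor));
    return 0;
}
```
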
  • Publication number: 20210075730
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
    Type: Application
    Filed: September 11, 2020
    Publication date: March 11, 2021
    Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
  • Patent number: 10929323
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
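
One common way to realize the abstract's rate control is per-core credits: a core may submit a request only while it holds credits, which return as the device drains work. A hedged C sketch of that scheme (the credit mechanism is an assumption here, not a detail confirmed by the abstract):

```c
#include <stdio.h>

typedef struct { int credits; } core_ctx_t;

/* Returns 1 if the request was accepted, 0 if the core must retry. */
static int submit_request(core_ctx_t *c)
{
    if (c->credits == 0)
        return 0;          /* no credit: back off instead of dropping */
    c->credits--;
    return 1;
}

static void on_completion(core_ctx_t *c) { c->credits++; }

int main(void)
{
    core_ctx_t core = { .credits = 2 };

    for (int i = 0; i < 4; i++) {
        if (submit_request(&core))
            printf("request %d accepted (credits left=%d)\n", i, core.credits);
        else {
            printf("request %d deferred; waiting for completion\n", i);
            on_completion(&core);  /* device drained one request */
            i--;                   /* retry the same request */
        }
    }
    return 0;
}
```
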
  • Publication number: 20200042479
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Applicant: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Publication number: 20200004584
    Abstract: In an embodiment, a processor for queue selection includes a plurality of processing engines (PEs) to execute threads, and a hardware queue manager. The hardware queue manager is to: detect that a first class lacks valid requests to be scheduled, the first class comprising a first plurality of scheduling queues, the first class associated with a first credit count; select a second class based on a second credit count associated with the second class, the second class comprising a second plurality of scheduling queues; and in response to a selection of the second class based on the second credit count, select a queue in the selected second class. Other embodiments are described and claimed.
    Type: Application
    Filed: June 28, 2018
    Publication date: January 2, 2020
    Inventors: William Burroughs, James Clee, Ambalavanar Arulambalam, Joseph Hasting, Niall McDonnell
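
The two-level selection this abstract describes can be modeled directly: classes group scheduling queues and carry credit counts; a first class with no valid requests causes a credit-based fallback to a second class, then a queue within it. An illustrative C sketch with invented structures and counts:

```c
#include <stdio.h>

#define NQUEUES 2

typedef struct {
    int credits;
    int pending[NQUEUES];   /* requests waiting in each scheduling queue */
} sched_class_t;

static int class_has_requests(const sched_class_t *c)
{
    for (int q = 0; q < NQUEUES; q++)
        if (c->pending[q] > 0)
            return 1;
    return 0;
}

static int pick_queue(const sched_class_t *c)
{
    for (int q = 0; q < NQUEUES; q++)
        if (c->pending[q] > 0)
            return q;
    return -1;
}

int main(void)
{
    sched_class_t first  = { .credits = 4, .pending = { 0, 0 } };
    sched_class_t second = { .credits = 2, .pending = { 0, 3 } };

    if (!class_has_requests(&first) && second.credits > 0) {
        int q = pick_queue(&second);
        second.credits--;
        printf("first class idle; scheduled queue %d of second class "
               "(credits left=%d)\n", q, second.credits);
    }
    return 0;
}
```
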
  • Patent number: 10445271
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Yipeng Wang, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson
  • Patent number: 10216668
    Abstract: Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue or dequeue data from the hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different physical queue manager without changing the virtual address of the virtual queue.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: February 26, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Jr-Shian Tsai, Andrew Herdrich, Tsung-Yuan Tai, Niall McDonnell, Stephen Van Doren, David Sonnier, Debra Bernstein, Hugh Wilkinson, Narender Vangati, Stephen Miller, Gage Eads, Andrew Cunningham, Jonathan Kenny, Bruce Richardson, William Burroughs, Joseph Hasting, An Yan, James Clee, Te Ma, Jerry Pirog, Jamison Whitesell
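
The virtual-queue indirection reduces to a translation table: cores address queues by virtual ID, the table maps each ID to a (manager, physical queue) pair, and moving a queue rewrites the entry while the virtual ID stays fixed. A minimal C sketch, with all field names assumed:

```c
#include <stdio.h>

typedef struct { int manager; int phys_queue; } qmap_t;

static qmap_t vq_table[4] = {
    { .manager = 0, .phys_queue = 5 },   /* virtual queue 0 */
    { .manager = 1, .phys_queue = 2 },   /* virtual queue 1 */
};

static void enqueue(int vq, int data)
{
    qmap_t m = vq_table[vq];   /* translate virtual -> physical */
    printf("vq %d -> manager %d, physical queue %d: enqueue %d\n",
           vq, m.manager, m.phys_queue, data);
}

int main(void)
{
    enqueue(0, 42);

    /* "Move" virtual queue 0 to a different hardware queue manager;
     * cores keep using the same virtual address. */
    vq_table[0] = (qmap_t){ .manager = 1, .phys_queue = 7 };
    enqueue(0, 43);
    return 0;
}
```
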
  • Publication number: 20190007318
    Abstract: Technologies for inflight packet count limiting include a network device. The network device is to receive a packet from a producer application. The packet is configured to be enqueued into a packet queue as a queue element to be consumed by a consumer application. The network device is also to increment, in response to receipt of the packet, an inflight count variable, determine whether the value of the inflight count variable satisfies an inflight count limit, and, in response to a determination that the value satisfies the inflight count limit, enqueue the packet.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Niall D. McDonnell, William Burroughs, Nitin N. Garegrat, David P. Sonnier
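
The check this abstract walks through maps almost line-for-line to software: receipt increments an inflight counter, and the packet is enqueued only while the counter satisfies the limit; consumption decrements it. A small C model, with the limit and function names assumed:

```c
#include <stdio.h>

#define INFLIGHT_LIMIT 2

static int inflight;

static int on_receive(int pkt)
{
    inflight++;                        /* count the packet on receipt */
    if (inflight > INFLIGHT_LIMIT) {   /* limit not satisfied: reject */
        inflight--;
        return 0;
    }
    printf("packet %d enqueued (inflight=%d)\n", pkt, inflight);
    return 1;
}

static void on_consume(int pkt)
{
    inflight--;                        /* queue element consumed */
    printf("packet %d consumed (inflight=%d)\n", pkt, inflight);
}

int main(void)
{
    on_receive(1);
    on_receive(2);
    if (!on_receive(3))
        printf("packet 3 held: inflight limit reached\n");
    on_consume(1);
    on_receive(3);
    return 0;
}
```
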
  • Patent number: 9864633
    Abstract: A network processor is described that is configured to multicast multiple data packets to one or more engines. In one or more implementations, the network processor includes an input/output adapter configured to parse a plurality of tasks. The input/output adapter includes a multicast module configured to determine a reference count value based upon a maximum multicast value of the plurality of tasks. The input/output adapter is also configured to set a reference count decrement value within the control data portion of the plurality of tasks. The reference count decrement value is based upon the maximum multicast value. The input/output adapter is also configured to decrement the reference count value by a corresponding reference count decrement value upon receiving an indication from an engine.
    Type: Grant
    Filed: July 27, 2015
    Date of Patent: January 9, 2018
    Assignee: Intel Corporation
    Inventors: Deepak Mital, Joseph A. Manzella, Ritchie J. Peachey, William Burroughs
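
One plausible reading of the reference-count bookkeeping: the count starts at the maximum multicast value, and per-task decrement values compensate for unused copies so the final engine's indication brings the count exactly to zero, at which point the packet buffer can be released. A speculative C sketch under that interpretation:

```c
#include <stdio.h>

#define MAX_MULTICAST 4   /* copies provisioned for */
#define ACTUAL_COPIES 3   /* engines the packet actually goes to */

int main(void)
{
    int refcount = MAX_MULTICAST;

    /* Decrement value compensates for unused copies: the last engine's
     * indication must bring the count exactly to zero. */
    int first_decrement = MAX_MULTICAST - ACTUAL_COPIES + 1;

    refcount -= first_decrement;                /* first engine done */
    printf("after first engine: refcount=%d\n", refcount);

    for (int e = 1; e < ACTUAL_COPIES; e++) {   /* remaining engines */
        refcount -= 1;
        printf("after engine %d: refcount=%d\n", e + 1, refcount);
    }

    if (refcount == 0)
        printf("all engines done: packet buffer released\n");
    return 0;
}
```
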
  • Publication number: 20170286337
    Abstract: Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue or dequeue data from the hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different physical queue manager without changing the virtual address of the virtual queue.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Ren Wang, Yipeng Wang, Jr-Shian Tsai, Andrew Herdrich, Tsung-Yuan Tai, Niall McDonnell, Stephen Van Doren, David Sonnier, Debra Bernstein, Hugh Wilkinson, Narender Vangati, Stephen Miller, Gage Eads, Andrew Cunningham, Jonathan Kenny, Bruce Richardson, William Burroughs, Joseph Hasting, An Yan, James Clee, Te Ma, Jerry Pirog, Jamison Whitesell
  • Publication number: 20170192921
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: January 4, 2016
    Publication date: July 6, 2017
    Inventors: Ren Wang, Yipeng Wang, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Publication number: 20150331718
    Abstract: A network processor is described that is configured to multicast multiple data packets to one or more engines. In one or more implementations, the network processor includes an input/output adapter configured to parse a plurality of tasks. The input/output adapter includes a multicast module configured to determine a reference count value based upon a maximum multicast value of the plurality of tasks. The input/output adapter is also configured to set a reference count decrement value within the control data portion of the plurality of tasks. The reference count decrement value is based upon the maximum multicast value. The input/output adapter is also configured to decrement the reference count value by a corresponding reference count decrement value upon receiving an indication from an engine.
    Type: Application
    Filed: July 27, 2015
    Publication date: November 19, 2015
    Inventors: Deepak Mital, Joseph A. Manzella, Ritchie J. Peachey, William Burroughs
  • Patent number: 9152564
    Abstract: Described embodiments provide an input/output interface of a network processor that generates a request to store received packets to a system cache. If an entry associated with the received packet does not exist in the system cache, the system cache determines whether a backpressure indicator of the system cache is set. If the backpressure indicator is set, the received packet is written to the shared memory. If the backpressure indicator is not set, the system cache determines whether to evict data from the system cache in order to store the received packet. If an eviction rate of the system cache has reached a threshold, the system cache sets a backpressure indicator and writes the received packet to the shared memory. If the eviction rate has not reached the threshold, the system cache determines an available entry and writes the received packet to the available entry in the system cache.
    Type: Grant
    Filed: November 28, 2012
    Date of Patent: October 6, 2015
    Assignee: Intel Corporation
    Inventors: Deepak Mital, William Burroughs
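
The abstract's decision tree translates directly: a set backpressure indicator routes the packet to shared memory; otherwise an eviction rate at or above threshold sets the indicator and does the same; otherwise the packet takes a cache entry (evicting if needed). A C model with illustrative thresholds and names:

```c
#include <stdio.h>

#define EVICTION_THRESHOLD 100   /* evictions per interval, assumed */

static int backpressure;
static int eviction_rate;

static const char *store_packet(void)
{
    if (backpressure)
        return "shared memory (backpressure set)";
    if (eviction_rate >= EVICTION_THRESHOLD) {
        backpressure = 1;        /* start shedding load to memory */
        return "shared memory (eviction threshold reached)";
    }
    eviction_rate++;             /* may evict an entry to make room */
    return "system cache entry";
}

int main(void)
{
    eviction_rate = 99;
    printf("packet 1 -> %s\n", store_packet());   /* cache; rate hits 100 */
    printf("packet 2 -> %s\n", store_packet());   /* threshold reached */
    printf("packet 3 -> %s\n", store_packet());   /* backpressure set */
    return 0;
}
```
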
  • Patent number: 9154442
    Abstract: Described embodiments process hash operation requests of a network processor. A hash processor determines a job identifier, a corresponding hash table, and a setting of a traversal indicator for a received hash operation request that includes a desired key. The hash processor concurrently generates a read request for a first bucket of the hash table, and provides the job identifier, the key and the traversal indicator to a read return processor. The read return processor stores the key and traversal indicator in a job memory and stores, in a return memory, entries of the first bucket of the hash table. If a stored entry matches the desired key, the read return processor determines, based on the traversal indicator, whether to read a next bucket of the hash table and provides the job identifier, the matching key, and the address of the bucket containing the matching key to the hash processor.
    Type: Grant
    Filed: July 17, 2013
    Date of Patent: October 6, 2015
    Assignee: Intel Corporation
    Inventors: Deepak Mital, Mohammad Reza Hakami, William Burroughs
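
The bucket walk can be sketched as a chained-bucket search in which a traversal indicator decides whether a miss in the first bucket continues to the next one. Chaining via a `next` pointer and all names here are assumptions for illustration, not details from the filing:

```c
#include <stdio.h>
#include <string.h>

#define BUCKET_ENTRIES 2

typedef struct bucket {
    const char *keys[BUCKET_ENTRIES];
    struct bucket *next;
} bucket_t;

static const bucket_t *find_key(const bucket_t *b, const char *key,
                                int traverse)
{
    while (b) {
        for (int i = 0; i < BUCKET_ENTRIES; i++)
            if (b->keys[i] && strcmp(b->keys[i], key) == 0)
                return b;          /* bucket containing the matching key */
        if (!traverse)
            break;                 /* indicator says: first bucket only */
        b = b->next;
    }
    return NULL;
}

int main(void)
{
    bucket_t b2 = { .keys = { "flow-b", NULL }, .next = NULL };
    bucket_t b1 = { .keys = { "flow-a", NULL }, .next = &b2 };

    printf("flow-b, traverse=1: %s\n",
           find_key(&b1, "flow-b", 1) ? "found" : "missing");
    printf("flow-b, traverse=0: %s\n",
           find_key(&b1, "flow-b", 0) ? "found" : "missing");
    return 0;
}
```
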
  • Patent number: 9094219
    Abstract: A network processor is described that is configured to multicast multiple data packets to one or more engines. In one or more implementations, the network processor includes an input/output adapter configured to parse a plurality of tasks. The input/output adapter includes a multicast module configured to determine a reference count value based upon a maximum multicast value of the plurality of tasks. The input/output adapter is also configured to set a reference count decrement value within the control data portion of the plurality of tasks. The reference count decrement value is based upon the maximum multicast value. The input/output adapter is also configured to decrement the reference count value by a corresponding reference count decrement value upon receiving an indication from an engine.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: July 28, 2015
    Assignee: Intel Corporation
    Inventors: Deepak Mital, Joseph A. Manzella, Ritchie J. Peachey, William Burroughs
  • Patent number: 8949838
    Abstract: Described embodiments process multiple threads of commands in a network processor. One or more tasks are generated corresponding to each received packet, and the tasks are provided to a packet processor module (MPP). A scheduler associates each received task with a command flow. A thread updater writes state data corresponding to the flow to a context memory. The scheduler determines an order of processing of the command flows. When a processing thread of a multi-thread processor is available, the thread updater loads, from the context memory, state data for at least one scheduled flow to one of the multi-thread processors. The multi-thread processor processes a next command of the flow based on the loaded state data. If the processed command requires operation of a co-processor module, the multi-thread processor sends a co-processor request and switches command processing from the first flow to a second flow.
    Type: Grant
    Filed: May 17, 2012
    Date of Patent: February 3, 2015
    Assignee: LSI Corporation
    Inventors: Deepak Mital, William Burroughs, Eran Dosh, Eyal Rosin
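
A rough software analogy for the thread switch: each flow's state is a context record, and a command that needs the co-processor causes the thread to issue the request and resume a different flow instead of stalling. A speculative C sketch with invented flow data:

```c
#include <stdio.h>

typedef struct {
    const char *name;
    int next_cmd;            /* state data loaded from context memory */
    int cmds;
    int needs_coproc;        /* command that triggers a co-processor call */
} flow_ctx_t;

int main(void)
{
    flow_ctx_t flows[2] = {
        { "flow-A", 0, 3, 1 },   /* command 1 needs the co-processor */
        { "flow-B", 0, 2, -1 },
    };
    int cur = 0, done = 0;

    while (done < 2) {
        flow_ctx_t *f = &flows[cur];
        if (f->next_cmd < f->cmds) {
            printf("%s: command %d\n", f->name, f->next_cmd);
            if (f->next_cmd++ == f->needs_coproc) {
                printf("%s: co-processor request; switching flow\n", f->name);
                cur ^= 1;            /* save context, pick the other flow */
                continue;
            }
            if (f->next_cmd == f->cmds)
                done++;
        } else {
            cur ^= 1;                /* this flow is finished; rotate */
        }
    }
    return 0;
}
```
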
  • Patent number: 8910168
    Abstract: Described embodiments generate tasks corresponding to packets received by a network processor. A source processing module sends task messages including a task identifier and a task size to a destination processing module. The destination module receives the task message and determines a queue in which to store the task. Based on a used cache counter of the queue and a number of cache lines for the received task, the destination module determines whether the queue has reached a usage threshold. If the queue has reached the threshold, the destination module sends a backpressure message to the source module. Otherwise, if the queue has not reached the threshold, the destination module accepts the received task, stores data of the received task in the queue, increments the used cache counter for the queue corresponding to the number of cache lines for the received task, and processes the received task.
    Type: Grant
    Filed: November 28, 2012
    Date of Patent: December 9, 2014
    Assignee: LSI Corporation
    Inventors: Deepak Mital, William Burroughs, Michael R. Betker
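
The acceptance test reduces to cache-line accounting: convert the task size to cache lines, then either signal backpressure (the threshold would be exceeded) or accept the task and charge its lines to the queue's used counter. A C sketch with assumed sizes and threshold:

```c
#include <stdio.h>

#define CACHE_LINE   64
#define QUEUE_LIMIT  8     /* cache lines before backpressure, assumed */

static int used_lines;

/* Returns 1 if accepted, 0 if a backpressure message would be sent. */
static int receive_task(int task_bytes)
{
    int lines = (task_bytes + CACHE_LINE - 1) / CACHE_LINE;

    if (used_lines + lines > QUEUE_LIMIT)
        return 0;               /* backpressure to the source module */
    used_lines += lines;        /* accept and store the task */
    return 1;
}

int main(void)
{
    int sizes[] = { 128, 200, 256 };
    for (int i = 0; i < 3; i++)
        printf("task of %d bytes: %s (used=%d lines)\n", sizes[i],
               receive_task(sizes[i]) ? "accepted" : "backpressure",
               used_lines);
    return 0;
}
```
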
  • Patent number: 8873550
    Abstract: Described embodiments generate tasks corresponding to each packet received by a network processor. A destination processing module receives a task and determines, based on the task size, a queue in which to store the task, and whether the task is larger than space available within a current memory block of the queue. If the task is larger, an address of a next memory block in a memory is determined, and the address is provided to a source processing module of the task. The source processing module writes the task to the memory based on a provided offset address and the address of the next memory block, if provided. If a task is written to more than one memory block, the destination processing module preloads the address of the next memory block to a local memory to process queued tasks without stalling to retrieve the address of the next memory block.
    Type: Grant
    Filed: November 28, 2012
    Date of Patent: October 28, 2014
    Assignee: LSI Corporation
    Inventors: Deepak Mital, William Burroughs, Michael R. Betker, Joseph R. Hasting
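
The block chaining in this last abstract can be modeled with a current block, a write offset, and a preloaded next-block address that spares the dequeue path a stalling fetch of the link. A minimal C sketch with assumed block size and names:

```c
#include <stdio.h>

#define BLOCK_BYTES 256

typedef struct {
    int cur_block;       /* block tasks are currently written into */
    int offset;          /* next free byte within the block */
    int preloaded_next;  /* next-block address kept in local memory */
} queue_t;

static int alloc_block(void) { static int next = 1; return next++; }

static void write_task(queue_t *q, int task_bytes)
{
    if (q->offset + task_bytes > BLOCK_BYTES) {
        /* Task is larger than the space left: chain in a new block and
         * remember its address so dequeue needn't re-fetch the link. */
        q->preloaded_next = alloc_block();
        printf("task (%d B) spans block %d -> block %d\n",
               task_bytes, q->cur_block, q->preloaded_next);
        q->offset = task_bytes - (BLOCK_BYTES - q->offset);
        q->cur_block = q->preloaded_next;
    } else {
        printf("task (%d B) fits in block %d at offset %d\n",
               task_bytes, q->cur_block, q->offset);
        q->offset += task_bytes;
    }
}

int main(void)
{
    queue_t q = { .cur_block = alloc_block() };
    write_task(&q, 200);
    write_task(&q, 100);   /* does not fit: spills into a new block */
    return 0;
}
```
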