Patents by Inventor David Sonnier

David Sonnier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11828152
    Abstract: The present disclosure provides methods and systems for hydrofracturing processes utilizing native drilling cuttings to enhance wellbore permeability. The native drilling cuttings are obtained during drilling operations and may be used in hydrofracturing applications without further grinding or processing. The native drilling cuttings can be combined into a slurry and injected into a well for the hydrofracturing application. In some cases, the native drilling cuttings are dried before being combined into the slurry.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: November 28, 2023
    Assignee: Chevron U.S.A. Inc.
    Inventors: Robert Neil Trotter, Caleb Kimbrell Carroll, Harold Gordon Riordan, Leonardo F. Chaves Leal, Matthew David Sonnier, Jonathon Donald Fisher, Harry Martin Fernau, John Thomas Lattal, Amos Sunghyun Kim
  • Publication number: 20230231809
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
    Type: Application
    Filed: January 13, 2023
    Publication date: July 20, 2023
    Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
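    The abstract above describes allocating additional cores when the first core's throughput falls below a threshold. The minimal C sketch below illustrates that control flow; the queue layout, threshold value, and all names are illustrative assumptions, not the patented circuitry's implementation:

        #include <stdio.h>

        #define MAX_WORKERS 4
        #define THROUGHPUT_THRESHOLD 1000  /* packets/sec the first core must sustain */

        /* Identifiers for received packets sit in a queue; when the first core's
         * measured throughput does not satisfy the threshold, an additional core
         * is allocated to dequeue and process packets. Illustrative throughout. */
        typedef struct { int ids[64]; int head, tail; } id_queue;

        static void enqueue(id_queue *q, int id) { q->ids[q->tail++ % 64] = id; }
        static int  dequeue(id_queue *q)         { return q->ids[q->head++ % 64]; }
        static int  depth(const id_queue *q)     { return q->tail - q->head; }

        int main(void) {
            id_queue q = {{0}, 0, 0};
            int active_workers = 1;               /* the first core */

            for (int pkt = 0; pkt < 8; pkt++)     /* circuitry enqueues identifiers */
                enqueue(&q, pkt);

            int measured = 600;                   /* packets/sec on the first core */
            if (measured < THROUGHPUT_THRESHOLD && active_workers < MAX_WORKERS) {
                active_workers++;                 /* allocate one of the second cores */
                printf("throughput %d < %d: workers now %d\n",
                       measured, THROUGHPUT_THRESHOLD, active_workers);
            }
            while (depth(&q) > 0)                 /* workers dequeue and process */
                printf("processed packet id %d, forwarded to data consumer\n",
                       dequeue(&q));
            return 0;
        }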
  • Patent number: 11575607
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: February 7, 2023
    Assignee: Intel Corporation
    Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
  • Publication number: 20220286399
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for hardware queue scheduling for multi-core computing environments. An example apparatus includes a first core and a second core of a processor, and circuitry in a die of the processor, at least one of the first core or the second core included in the die, the at least one of the first core or the second core separate from the circuitry, the circuitry to enqueue an identifier to a queue implemented with the circuitry, the identifier associated with a data packet, assign the identifier in the queue to a first core of the processor, and in response to an execution of an operation on the data packet with the first core, provide the identifier to the second core to cause the second core to distribute the data packet.
    Type: Application
    Filed: September 11, 2020
    Publication date: September 8, 2022
    Inventors: Niall McDonnell, Gage Eads, Mrittika Ganguli, Chetan Hiremath, John Mangan, Stephen Palermo, Bruce Richardson, Edwin Verplanke, Praveen Mosur, Bradley Chaddick, Abhishek Khade, Abhirupa Layek, Sarita Maini, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
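    The abstract above describes a hardware queue that assigns an identifier to a first core and, once that core has operated on the packet, hands the identifier to a second core for distribution. A minimal C sketch of that two-stage handoff, with queue sizes and names assumed for illustration:

        #include <stdio.h>

        typedef struct { int ids[16]; int head, tail; } queue;
        static void push(queue *q, int id)   { q->ids[q->tail++] = id; }
        static int  pop(queue *q)            { return q->ids[q->head++]; }
        static int  is_empty(const queue *q) { return q->head == q->tail; }

        int main(void) {
            queue to_worker = {{0}, 0, 0}, to_distributor = {{0}, 0, 0};

            for (int id = 0; id < 4; id++)     /* circuitry enqueues an identifier */
                push(&to_worker, id);

            while (!is_empty(&to_worker)) {    /* first core executes an operation */
                int id = pop(&to_worker);
                printf("core 1 operated on packet %d\n", id);
                push(&to_distributor, id);     /* identifier handed to second core */
            }
            while (!is_empty(&to_distributor)) /* second core distributes the packet */
                printf("core 2 distributed packet %d\n", pop(&to_distributor));
            return 0;
        }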
  • Publication number: 20210324720
    Abstract: The present disclosure provides methods and systems for hydrofracturing processes utilizing native drilling cuttings to enhance wellbore permeability. The native drilling cuttings are obtained during drilling operations and may be used in hydrofracturing applications without further grinding or processing. The native drilling cuttings can be combined into a slurry and injected into a well for the hydrofracturing application. In some cases, the native drilling cuttings are dried before being combined into the slurry.
    Type: Application
    Filed: April 15, 2021
    Publication date: October 21, 2021
    Inventors: Robert Neil Trotter, Caleb Kimbrell Carroll, Harold Gordon Riordan, Leonardo F. Chaves Leal, Matthew David Sonnier, Jonathon Donald Fisher, Harry Martin Fernau, John Thomas Lattal, Amos Sunghyun Kim
  • Publication number: 20210075730
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
    Type: Application
    Filed: September 11, 2020
    Publication date: March 11, 2021
    Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
  • Patent number: 10929323
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus include multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
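    One element of the abstract above is a resource management system that limits the rate at which cores may submit requests. A plausible way to model that is credit-based admission, sketched below; the credit scheme and its constants are assumptions for illustration, not the patent's mechanism:

        #include <stdio.h>

        #define CREDITS_PER_CORE 3   /* illustrative credit budget */

        typedef struct { int credits; } core_ctx;

        /* A core may submit a request to the queue management device only while
           it holds a credit; otherwise it backs off instead of stalling or
           having the request dropped. */
        static int try_submit(core_ctx *c, int req) {
            if (c->credits == 0)
                return 0;                     /* no credit: hold the request back */
            c->credits--;
            printf("submitted request %d (%d credits left)\n", req, c->credits);
            return 1;
        }

        int main(void) {
            core_ctx core = { CREDITS_PER_CORE };
            for (int req = 0; req < 5; req++)
                if (!try_submit(&core, req))
                    printf("request %d deferred until credits return\n", req);
            core.credits += 2;                /* device returns credits as it completes work */
            try_submit(&core, 5);
            return 0;
        }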
  • Publication number: 20200042479
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus include multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Applicant: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Patent number: 10445271
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus include multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Yipeng Wang, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson
  • Publication number: 20190253357
    Abstract: A computing platform includes a classifier to classify a packet and assign a processing load weight to the packet based at least in part on the packet classification; and a load balancer coupled to the classifier to compute a total processing load weight of a queue of a packet processing system and assign the packet to a queue with a lowest total processing load weight.
    Type: Application
    Filed: October 15, 2018
    Publication date: August 15, 2019
    Inventors: Pravin Pathak, Sundar Vedantham, David Sonnier
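    The abstract above reduces to two steps: derive a processing load weight from the packet's classification, then place the packet on the queue whose total weight is lowest. A minimal C sketch with hypothetical classes and weights:

        #include <stdio.h>

        #define NQUEUES 3

        /* Hypothetical packet classes and their processing load weights; the
           patent derives the weight from the classifier's result. */
        enum pkt_class { SMALL, LARGE, CRYPTO };
        static const int class_weight[] = { 1, 4, 8 };

        int main(void) {
            int total[NQUEUES] = {0};         /* total load weight per queue */
            enum pkt_class pkts[] = { CRYPTO, SMALL, LARGE, SMALL, CRYPTO };
            int npkts = (int)(sizeof pkts / sizeof pkts[0]);

            for (int i = 0; i < npkts; i++) {
                int best = 0;
                for (int q = 1; q < NQUEUES; q++)  /* lowest total load weight */
                    if (total[q] < total[best])
                        best = q;
                total[best] += class_weight[pkts[i]];
                printf("packet %d (weight %d) -> queue %d (total %d)\n",
                       i, class_weight[pkts[i]], best, total[best]);
            }
            return 0;
        }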
  • Patent number: 10216668
    Abstract: Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue or dequeue data from the hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different physical queue manager without changing the virtual address of the virtual queue.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: February 26, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Jr-Shian Tsai, Andrew Herdrich, Tsung-Yuan Tai, Niall McDonnell, Stephen Van Doren, David Sonnier, Debra Bernstein, Hugh Wilkinson, Narender Vangati, Stephen Miller, Gage Eads, Andrew Cunningham, Jonathan Kenny, Bruce Richardson, William Burroughs, Joseph Hasting, An Yan, James Clee, Te Ma, Jerry Pirog, Jamison Whitesell
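    The abstract above centers on translating virtual queue addresses into physical queues so a virtual queue can migrate between hardware queue managers without its address changing. A minimal C sketch of that indirection, with the mapping table and sizes assumed for illustration:

        #include <stdio.h>

        #define NVQ 4   /* number of virtual queues (illustrative) */

        typedef struct { int manager; int pq; } phys_loc;

        /* vq_map translates a virtual queue address into (hardware queue
           manager, physical queue). Remapping an entry migrates the virtual
           queue without changing the address the cores use. */
        static phys_loc vq_map[NVQ] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };

        static void enqueue(int vq, int data) {
            phys_loc p = vq_map[vq];          /* virtual -> physical translation */
            printf("data %d via vq %d -> manager %d, physical queue %d\n",
                   data, vq, p.manager, p.pq);
        }

        int main(void) {
            enqueue(2, 42);
            vq_map[2] = (phys_loc){ 0, 3 };   /* migrate vq 2 to another manager */
            enqueue(2, 43);                   /* same virtual address still works */
            return 0;
        }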
  • Publication number: 20170286337
    Abstract: Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue or dequeue data from the hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different physical queue manager without changing the virtual address of the virtual queue.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Ren Wang, Yipeng Wang, Jr-Shian Tsai, Andrew Herdrich, Tsung-Yuan Tai, Niall McDonnell, Stephen Van Doren, David Sonnier, Debra Bernstein, Hugh Wilkinson, Narender Vangati, Stephen Miller, Gage Eads, Andrew Cunningham, Jonathan Kenny, Bruce Richardson, William Burroughs, Joseph Hasting, An Yan, James Clee, Te Ma, Jerry Pirog, Jamison Whitesell
  • Publication number: 20170192921
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus include multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: January 4, 2016
    Publication date: July 6, 2017
    Inventors: Ren Wang, Yipeng Wang, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Patent number: 9195464
    Abstract: Described embodiments provide a method of controlling processing flow in a network processor having one or more processing modules. A given one of the processing modules loads a script into a compute engine. The script includes instructions for the compute engine. The given one of the processing modules loads a register file into the compute engine. The register file includes operands for the instructions of the loaded script. A tracking vector of the compute engine is initialized to a default value, and the compute engine executes the instructions of the loaded script based on the operands of the loaded register file. The compute engine updates corresponding portions of the register file with updated data corresponding to the executed script. The tracking vector tracks the updated portions of the register file. The compute engine provides the tracking vector and the updated register file to the given one of the processing modules.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: November 24, 2015
    Assignee: Intel Corporation
    Inventors: David Sonnier, Chris Randall Stone, Charles Edward Peet, Jr.
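    The tracking vector in the abstract above records which portions of the register file a script updated, so only those portions need to be handed back. A minimal C sketch using a one-bit-per-register mask; the register count and write pattern are illustrative assumptions:

        #include <stdio.h>
        #include <stdint.h>

        #define NREGS 16   /* register file size (illustrative) */

        int main(void) {
            uint32_t regs[NREGS] = {0};   /* register file loaded for the script */
            uint16_t tracking = 0;        /* tracking vector, one bit per register,
                                             initialized to its default value */

            /* "Executing the script": each write updates a register, and the
               tracking vector marks it as updated. */
            int writes[][2] = { {3, 111}, {7, 222}, {3, 333} };  /* {reg, value} */
            for (int i = 0; i < 3; i++) {
                regs[writes[i][0]] = (uint32_t)writes[i][1];
                tracking |= (uint16_t)(1u << writes[i][0]);
            }

            /* Only the tracked portions of the register file are returned. */
            for (int r = 0; r < NREGS; r++)
                if (tracking & (1u << r))
                    printf("return updated reg %d = %u\n", r, regs[r]);
            return 0;
        }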
  • Patent number: 9160684
    Abstract: Described embodiments provide for dynamically controlling a scheduling rate of each node in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A traffic manager enqueues received tasks in a queue of the scheduling hierarchy associated with a data flow. The queue has a parent scheduler at each level of the hierarchy up to the root scheduler. The traffic manager maintains one or more scheduling data structures for each node in the scheduling hierarchy. If the traffic manager receives a rate reduction request corresponding to a given node of the scheduling hierarchy, the traffic manager updates one or more indicators in the scheduling data structure corresponding to the given node and removes the given node from the scheduling hierarchy, thereby reducing the scheduling rate of the node.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: October 13, 2015
    Assignee: Intel Corporation
    Inventors: Balakrishnan Sundararaman, Shashank Nemawarkar, David Sonnier, Allen Vestal
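    The abstract above reduces a node's scheduling rate by updating its data structure and removing it from the hierarchy. A minimal C sketch of a scheduler that skips a node flagged by a rate reduction request; the round-robin policy and node layout are illustrative assumptions:

        #include <stdio.h>

        #define NCHILDREN 4

        typedef struct { int id; int available; } node;  /* illustrative state */

        int main(void) {
            node kids[NCHILDREN] = { {0, 1}, {1, 1}, {2, 1}, {3, 1} };

            kids[1].available = 0;   /* rate reduction request: remove node 1 */

            /* The scheduler now skips node 1, so its scheduling rate drops
               until it is reinserted into the hierarchy. */
            for (int round = 0; round < 2; round++)
                for (int i = 0; i < NCHILDREN; i++)
                    if (kids[i].available)
                        printf("round %d: schedule node %d\n", round, kids[i].id);
            return 0;
        }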
  • Patent number: 8869151
    Abstract: Described embodiments provide for controlling a state of each node in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A traffic manager enqueues received tasks in a queue of the scheduling hierarchy associated with a data flow. The traffic manager maintains scheduling data structures for each node in the scheduling hierarchy. The scheduling data structures include a backpressure indicator and a timer indicator. If the backpressure indicator is set, the traffic manager sets the node as unavailable for scheduling and removes the node from the scheduling hierarchy. If the timer indicator is set, the traffic manager sets the node as unavailable for scheduling. Otherwise, if neither the backpressure indicator nor the timer indicator is set, the traffic manager sets the node as available for scheduling.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: October 21, 2014
    Assignee: LSI Corporation
    Inventors: Balakrishnan Sundararaman, Shashank Nemawarkar, David Sonnier, Shailendra Aulakh
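    The availability rule in the abstract above transcribes almost directly into code: backpressure set means unavailable and removed, timer set means unavailable, neither means available. A minimal C sketch:

        #include <stdio.h>

        typedef struct { int backpressure; int timer; } sched_node;

        /* Availability rule as stated in the abstract. */
        static const char *state(const sched_node *n) {
            if (n->backpressure) return "unavailable, removed from hierarchy";
            if (n->timer)        return "unavailable";
            return "available";
        }

        int main(void) {
            sched_node nodes[] = { {1, 0}, {0, 1}, {0, 0} };
            for (int i = 0; i < 3; i++)
                printf("node %d: %s\n", i, state(&nodes[i]));
            return 0;
        }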
  • Patent number: 8869150
    Abstract: Described embodiments provide for queuing tasks in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager performs a task enqueue operation for the task. The task enqueue operation includes adding the received task to an associated queue of the scheduling hierarchy, where the queue is associated with a data flow of the received task. The queue has a corresponding scheduler level M, where M is a positive integer less than or equal to N. Starting at the queue and iteratively repeating at each scheduling level until reaching the root scheduler, each node in the scheduling hierarchy maintains an actual count of tasks corresponding to the node. Each node communicates a capped task count to a corresponding parent scheduler at a relative next scheduler level.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: October 21, 2014
    Assignee: LSI Corporation
    Inventors: Balakrishnan Sundararaman, Shashank Nemawarkar, David Sonnier, Allen Vestal
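    The key idea in the abstract above is that each node keeps its actual task count but communicates only a capped count upward, so one very deep queue cannot dominate its parent's view. A minimal C sketch; the cap value and counts are illustrative assumptions:

        #include <stdio.h>

        #define CAP 4   /* illustrative cap, not the patent's value */

        static int min(int a, int b) { return a < b ? a : b; }

        int main(void) {
            int actual[3] = { 9, 2, 0 };  /* actual task counts in three child queues */
            int parent_view = 0;

            for (int q = 0; q < 3; q++)
                parent_view += min(actual[q], CAP);  /* capped count sent to parent */

            printf("actual total %d, parent's capped view %d\n",
                   actual[0] + actual[1] + actual[2], parent_view);
            return 0;
        }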
  • Patent number: 8848723
    Abstract: Described embodiments provide for dynamically constructing a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager queues the received task in the associated queue, the queue having a corresponding parent scheduler at each of one or more next levels of the scheduling hierarchy up to the root scheduler. A parent scheduler selects, starting at the root scheduler and iteratively repeating at each of the corresponding N scheduling levels until a queue is selected, a child node to transmit at least one task. The traffic manager forms output packets for transmission based on the at least one task from the selected queue.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: September 30, 2014
    Assignee: LSI Corporation
    Inventors: Balakrishnan Sundararaman, Shashank Nemawarkar, David Sonnier, Shailendra Aulakh
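    The selection step in the abstract above starts at the root scheduler and repeats at each level until a queue is reached. A minimal C sketch of that descent; the tree shape and the fixed pick of the first child stand in for whatever policy each scheduler actually applies:

        #include <stdio.h>

        typedef struct node { const char *name; struct node *child[2]; } node;

        int main(void) {
            node q0   = { "queue A", { 0, 0 } };
            node q1   = { "queue B", { 0, 0 } };
            node mid  = { "level-1 scheduler", { &q0, &q1 } };
            node root = { "root scheduler",    { &mid, 0 } };

            /* Start at the root; each parent selects a child until a queue is
               reached. A real scheduler applies its policy at each level; this
               sketch always picks the first child. */
            node *cur = &root;
            while (cur->child[0])
                cur = cur->child[0];
            printf("selected %s: dequeue a task and form an output packet\n",
                   cur->name);
            return 0;
        }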
  • Patent number: 8837501
    Abstract: Described embodiments provide sharing data between nodes in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets, each task having a shared parameter ID. The traffic manager determines the shared parameter ID value of the received task and queues the received task in a queue of the scheduling hierarchy. The queue has a scheduler level M and a parent scheduler at each of M−1 levels in the scheduling hierarchy. The traffic manager determines a shared parameter ID value of the queue. The traffic manager loads, from a shared memory to a corresponding level one cache, one or more shared parameter values corresponding to at least one of the determined shared parameter ID value of the received task and the determined shared parameter ID value of the queue.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: September 16, 2014
    Assignee: LSI Corporation
    Inventors: Balakrishnan Sundararaman, Shailendra Aulakh, David Sonnier, Shashank Nemawarkar
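    The abstract above loads shared parameter values, keyed by shared parameter ID, from shared memory into a level one cache before use. A minimal C sketch of that per-ID caching; the cache geometry and the eviction choice are illustrative assumptions:

        #include <stdio.h>

        #define NSHARED 8    /* shared parameter blocks in shared memory */
        #define L1_SLOTS 2   /* level one cache slots (illustrative) */

        static int shared_mem[NSHARED] = { 10, 20, 30, 40, 50, 60, 70, 80 };
        static int l1_id[L1_SLOTS] = { -1, -1 };  /* shared parameter ID per slot */
        static int l1_val[L1_SLOTS];

        /* Return the shared parameter value for an ID, filling the L1 cache on
           a miss (always evicting slot 0 here, purely for brevity). */
        static int load_shared(int id) {
            for (int s = 0; s < L1_SLOTS; s++)
                if (l1_id[s] == id) {
                    printf("id %d: L1 hit\n", id);
                    return l1_val[s];
                }
            l1_id[0] = id;
            l1_val[0] = shared_mem[id];
            printf("id %d: loaded from shared memory\n", id);
            return l1_val[0];
        }

        int main(void) {
            load_shared(3);   /* shared parameter ID of the received task */
            load_shared(3);   /* same ID on the queue: now served from L1 */
            return 0;
        }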
  • Patent number: 8677075
    Abstract: Described embodiments provide a network processor having a plurality of processing modules coupled to a system cache and a shared memory. A memory manager allocates blocks of the shared memory to a requesting one of the processing modules. The allocated blocks store data corresponding to packets received by the network processor. The memory manager maintains a reference count for each allocated memory block indicating a number of processing modules accessing the block. One of the processing modules reads the data stored in the allocated memory blocks, stores the read data to corresponding entries of the system cache and operates on the data stored in the system cache. Upon completion of operation on the data, the processing module requests to decrement the reference count of each memory block. Based on the reference count, the memory manager invalidates the entries of the system cache and deallocates the memory blocks.
    Type: Grant
    Filed: January 27, 2012
    Date of Patent: March 18, 2014
    Assignee: LSI Corporation
    Inventors: Deepak Mital, William Burroughs, David Sonnier, Steven Pollock, David Brown, Joseph Hasting
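    The abstract above ties deallocation and cache invalidation to a per-block reference count of the processing modules accessing the block. A minimal C sketch; the block pool and naming are illustrative assumptions:

        #include <stdio.h>

        #define NBLOCKS 4   /* illustrative block pool size */

        typedef struct { int refcount; int in_use; } mem_block;
        static mem_block blocks[NBLOCKS];

        static int alloc_block(void) {
            for (int b = 0; b < NBLOCKS; b++)
                if (!blocks[b].in_use) {
                    blocks[b].in_use = 1;
                    blocks[b].refcount = 1;   /* first module's reference */
                    return b;
                }
            return -1;                        /* pool exhausted */
        }

        /* When the last reference is dropped, the manager invalidates the
           block's system cache entries and returns the block to the pool. */
        static void decref(int b) {
            if (--blocks[b].refcount == 0) {
                printf("block %d: invalidate cache entries and deallocate\n", b);
                blocks[b].in_use = 0;
            }
        }

        int main(void) {
            int b = alloc_block();
            blocks[b].refcount++;   /* a second module also accesses the block */
            decref(b);              /* first module finishes */
            decref(b);              /* last reference: block reclaimed */
            return 0;
        }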