Patents by Inventor Charles Edward Peet, Jr.

Charles Edward Peet, Jr. has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9218290
    Abstract: Described embodiments provide for storing data in a local cache of one of a plurality of processing modules of a network processor. A control processing module determines presence of data stored in its local cache while concurrently sending a request to read the data from a shared memory and from one or more local caches corresponding to other of the plurality of processing modules. Each of the plurality of processing modules responds whether the data is located in one or more corresponding local caches. The control processing module determines, based on the responses, presence of the data in the local caches corresponding to the other processing modules. If the data is present in one of the local caches corresponding to one of the other processing modules, the control processing module reads the data from the local cache containing the data and cancels the read request to the shared memory. An illustrative sketch of this read flow appears after this entry.
    Type: Grant
    Filed: July 27, 2011
    Date of Patent: December 22, 2015
    Assignee: Intel Corporation
    Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
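    The read flow summarized in the abstract above can be pictured with a short C sketch. This is only an illustration under assumed names and sizes (struct module, struct cache_line, NUM_MODULES, and so on), not the patented implementation, and the sequential loop merely stands in for cache queries the hardware issues concurrently.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_MODULES  4
      #define CACHE_LINES  8
      #define SHARED_WORDS 1024

      struct cache_line { bool valid; uint32_t addr; uint32_t data; };
      struct module     { struct cache_line cache[CACHE_LINES]; };

      static struct module modules[NUM_MODULES];
      static uint32_t shared_memory[SHARED_WORDS];

      /* Look up addr in one module's local cache. */
      static bool cache_lookup(const struct module *m, uint32_t addr, uint32_t *data)
      {
          for (int i = 0; i < CACHE_LINES; i++) {
              if (m->cache[i].valid && m->cache[i].addr == addr) {
                  *data = m->cache[i].data;
                  return true;
              }
          }
          return false;
      }

      /* Control module's read: check its own cache while a speculative
       * shared-memory read and the peer-cache queries are outstanding. */
      static uint32_t control_read(int self, uint32_t addr)
      {
          uint32_t data;

          if (cache_lookup(&modules[self], addr, &data))
              return data;                      /* hit in the module's own local cache */

          for (int m = 0; m < NUM_MODULES; m++) {
              if (m == self)
                  continue;
              if (cache_lookup(&modules[m], addr, &data))
                  return data;                  /* peer cache hit: the pending
                                                   shared-memory read is cancelled */
          }

          return shared_memory[addr % SHARED_WORDS];   /* miss everywhere: shared memory answers */
      }

      int main(void)
      {
          modules[2].cache[0] = (struct cache_line){ .valid = true, .addr = 0x10, .data = 99 };
          printf("read 0x10 -> %u\n", (unsigned)control_read(0, 0x10));
          return 0;
      }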
  • Patent number: 9195464
    Abstract: Described embodiments provide a method of controlling processing flow in a network processor having one or more processing modules. A given one of the processing modules loads a script into a compute engine. The script includes instructions for the compute engine. The given one of the processing modules loads a register file into the compute engine. The register file includes operands for the instructions of the loaded script. A tracking vector of the compute engine is initialized to a default value, and the compute engine executes the instructions of the loaded script based on the operands of the loaded register file. The compute engine updates corresponding portions of the register file with updated data corresponding to the executed script. The tracking vector tracks the updated portions of the register file. The compute engine provides the tracking vector and the updated register file to the given one of the processing modules. An illustrative sketch of this script and tracking-vector flow appears after this entry.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: November 24, 2015
    Assignee: Intel Corporation
    Inventors: David Sonnier, Chris Randall Stone, Charles Edward Peet, Jr.
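    The script, register file, and tracking vector described in the abstract above can be modeled with a small C sketch. The instruction set, register count, and the bitmask used as the tracking vector are assumptions made for illustration; this is not the patented compute engine.

      #include <stdint.h>
      #include <stdio.h>

      #define NUM_REGS 16

      enum opcode { OP_ADD, OP_MOV, OP_END };

      struct instr { enum opcode op; uint8_t dst, src1, src2; };

      struct compute_engine {
          uint32_t regfile[NUM_REGS];   /* operands loaded by the processing module */
          uint16_t tracking;            /* one bit per register the script updates */
      };

      /* Execute a loaded script and record every register it writes. */
      static void run_script(struct compute_engine *ce, const struct instr *script)
      {
          ce->tracking = 0;                               /* tracking vector -> default value */
          for (const struct instr *i = script; i->op != OP_END; i++) {
              switch (i->op) {
              case OP_ADD: ce->regfile[i->dst] = ce->regfile[i->src1] + ce->regfile[i->src2]; break;
              case OP_MOV: ce->regfile[i->dst] = ce->regfile[i->src1];                        break;
              default:     break;
              }
              ce->tracking |= (uint16_t)(1u << i->dst);   /* mark the updated portion */
          }
      }

      int main(void)
      {
          struct compute_engine ce = { .regfile = { [1] = 7, [2] = 5 } };
          const struct instr script[] = {
              { OP_ADD, 3, 1, 2 },        /* r3 = r1 + r2 */
              { OP_MOV, 4, 3, 0 },        /* r4 = r3      */
              { OP_END, 0, 0, 0 },
          };
          run_script(&ce, script);
          /* The processing module reads back only the registers marked as updated. */
          for (int r = 0; r < NUM_REGS; r++)
              if (ce.tracking & (1u << r))
                  printf("r%d = %u\n", r, (unsigned)ce.regfile[r]);
          return 0;
      }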
  • Patent number: 9183145
    Abstract: Described embodiments provide a method of coherently storing data in a network processor having a plurality of processing modules and a shared memory. A control processor sends an atomic update request to a configuration controller. The atomic update request corresponds to data stored in the shared memory, the data also stored in a local pipeline cache corresponding to a client processing module. The configuration controller sends the atomic update request to the client processing modules. Each client processing module determines presence of an active access operation of a cache line in the local cache corresponding to the data of the atomic update request. If the active access operation of the cache line is absent, the client processing module writes the cache line from the local cache to shared memory, clears a valid indicator corresponding to the cache line and updates the data corresponding to the atomic update request. An illustrative sketch of this atomic-update flow appears after this entry.
    Type: Grant
    Filed: July 27, 2011
    Date of Patent: November 10, 2015
    Assignee: Intel Corporation
    Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
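    A compact C sketch of the atomic-update handshake in the abstract above is given below. The client and cache structures, the in_use flag standing in for an "active access operation", and the defer-by-returning-false behavior are assumptions for illustration only, not the patented protocol.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_CLIENTS  4
      #define CACHE_LINES  8
      #define SHARED_WORDS 1024

      struct line   { bool valid; bool in_use; uint32_t addr; uint32_t data; };
      struct client { struct line cache[CACHE_LINES]; };

      static struct client clients[NUM_CLIENTS];
      static uint32_t      shared_memory[SHARED_WORDS];

      /* Configuration controller's side of the update: every client that holds the
       * line flushes it and clears its valid bit, then the shared copy is rewritten. */
      static bool atomic_update(uint32_t addr, uint32_t new_value)
      {
          for (int c = 0; c < NUM_CLIENTS; c++) {
              for (int l = 0; l < CACHE_LINES; l++) {
                  struct line *ln = &clients[c].cache[l];
                  if (!ln->valid || ln->addr != addr)
                      continue;
                  if (ln->in_use)
                      return false;                               /* active access: defer the update */
                  shared_memory[addr % SHARED_WORDS] = ln->data;  /* write the cache line back */
                  ln->valid = false;                              /* clear the valid indicator */
              }
          }
          shared_memory[addr % SHARED_WORDS] = new_value;         /* apply the atomic update */
          return true;
      }

      int main(void)
      {
          clients[1].cache[0] = (struct line){ .valid = true, .in_use = false, .addr = 0x20, .data = 5 };
          printf(atomic_update(0x20, 7) ? "updated\n" : "deferred\n");
          return 0;
      }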
  • Patent number: 8683221
    Abstract: Described embodiments provide a method of coordinating debugging operations in a network processor. The network processor has one or more processing modules. A system cache of the network processor requests a data transfer between the system cache and at least one external memory. A memory interface of the network processor selects an encrypted data pipeline or a non-encrypted data pipeline based on whether the processed data transfer request includes an encrypted operation. If the data transfer request includes an encrypted operation, the memory interface provides the data transfer to the encrypted data pipeline and checks whether a debug indicator is set for the data transfer request. If the debug indicator is set, the memory interface disables encryption/decryption of the encrypted data pipeline. The data transfer request is performed by the encrypted data pipeline to the at least one external memory. An illustrative sketch of this pipeline selection appears after this entry.
    Type: Grant
    Filed: October 17, 2011
    Date of Patent: March 25, 2014
    Assignee: LSI Corporation
    Inventors: Charles Edward Peet, Jr., Michael Betker
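    The pipeline selection in the abstract above reduces to a small routing decision, sketched here in C. The struct transfer_request fields and the XOR stand-in for the real cipher are assumptions made for the example; this is not the patented memory interface.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct transfer_request {
          bool    encrypted;   /* request targets the encrypted data pipeline */
          bool    debug;       /* debug indicator: bypass the cipher in that pipeline */
          uint8_t payload;
      };

      /* Trivial XOR standing in for the real encryption stage. */
      static uint8_t cipher(uint8_t byte) { return byte ^ 0xA5; }

      /* Memory-interface routing: pick the pipeline, then honor the debug indicator. */
      static uint8_t route_to_external_memory(const struct transfer_request *req)
      {
          if (!req->encrypted)
              return req->payload;             /* non-encrypted data pipeline */
          if (req->debug)
              return req->payload;             /* encrypted pipeline, crypto disabled for debug */
          return cipher(req->payload);         /* encrypted pipeline, crypto enabled */
      }

      int main(void)
      {
          struct transfer_request r = { .encrypted = true, .debug = true, .payload = 0x42 };
          printf("byte written to external memory: 0x%02x\n", route_to_external_memory(&r));
          return 0;
      }

    Keeping debug traffic in the encrypted pipeline while disabling the cipher lets the data path be observed in the clear without changing which pipeline handles the request.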
  • Publication number: 20120084498
    Abstract: Described embodiments provide a method of controlling processing flow in a network processor having one or more processing modules. A given one of the processing modules loads a script into a compute engine. The script includes instructions for the compute engine. The given one of the processing modules loads a register file into the compute engine. The register file includes operands for the instructions of the loaded script. A tracking vector of the compute engine is initialized to a default value, and the compute engine executes the instructions of the loaded script based on the operands of the loaded register file. The compute engine updates corresponding portions of the register file with updated data corresponding to the executed script. The tracking vector tracks the updated portions of the register file. The compute engine provides the tracking vector and the updated register file to the given one of the processing modules.
    Type: Application
    Filed: December 9, 2011
    Publication date: April 5, 2012
    Inventors: David Sonnier, Chris Randall Stone, Charles Edward Peet, Jr.
  • Publication number: 20120036351
    Abstract: Described embodiments provide a method of coordinating debugging operations in a network processor. The network processor has one or more processing modules. A system cache of the network processor requests a data transfer between the system cache and at least one external memory. A memory interface of the network processor selects an encrypted data pipeline or a non-encrypted data pipeline based on whether the processed data transfer request includes an encrypted operation. If the data transfer request includes an encrypted operation, the memory interface provides the data transfer to the encrypted data pipeline and checks whether a debug indicator is set for the data transfer request. If the debug indicator is set, the memory interface disables encryption/decryption of the encrypted data pipeline. The data transfer request is performed by the encrypted data pipeline to the at least one external memory.
    Type: Application
    Filed: October 17, 2011
    Publication date: February 9, 2012
    Inventors: Charles Edward Peet, Jr., Michael Betker
  • Publication number: 20110289180
    Abstract: Described embodiments provide for storing data in a local cache of one of a plurality of processing modules of a network processor. A control processing module determines presence of data stored in its local cache while concurrently sending a request to read the data from a shared memory and from one or more local caches corresponding to other of the plurality of processing modules. Each of the plurality of processing modules responds whether the data is located in one or more corresponding local caches. The control processing module determines, based on the responses, presence of the data in the local caches corresponding to the other processing modules. If the data is present in one of the local caches corresponding to one of the other processing modules, the control processing module reads the data from the local cache containing the data and cancels the read request to the shared memory.
    Type: Application
    Filed: July 27, 2011
    Publication date: November 24, 2011
    Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
  • Publication number: 20110289279
    Abstract: Described embodiments provide a method of coherently storing data in a network processor having a plurality of processing modules and a shared memory. A control processor sends an atomic update request to a configuration controller. The atomic update request corresponds to data stored in the shared memory, the data also stored in a local pipeline cache corresponding to a client processing module. The configuration controller sends the atomic update request to the client processing modules. Each client processing module determines presence of an active access operation of a cache line in the local cache corresponding to the data of the atomic update request. If the active access operation of the cache line is absent, the client processing module writes the cache line from the local cache to shared memory, clears a valid indicator corresponding to the cache line and updates the data corresponding to the atomic update request.
    Type: Application
    Filed: July 27, 2011
    Publication date: November 24, 2011
    Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
  • Patent number: 6249756
    Abstract: An improved hybrid flow control protocol provides FIFO capacity to prevent overflow from bytes that arrive after the FIFO has indicated it is not ready to receive any more bytes, using a combination of a high/low watermark and a credit-based system. In one embodiment, when the byte count exceeds the high watermark, fixed credits are sent when N bytes are pulled from the FIFO. In a second embodiment, variable credits are sent depending on the difference between the number of bytes received into and pulled from the FIFO. An illustrative sketch of this watermark/credit scheme appears after this entry.
    Type: Grant
    Filed: December 7, 1998
    Date of Patent: June 19, 2001
    Assignee: Compaq Computer Corp.
    Inventors: William Patterson Bunton, David A. Brown, David T. Heron, Charles Edward Peet, Jr., William Joel Watson, John C. Krause
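    The hybrid watermark/credit scheme in the abstract above can be sketched in a few lines of C. The watermark values, the CREDIT_CHUNK size N, and the way credit mode is entered and left are assumptions chosen for the example (modeling the fixed-credit embodiment), not the patented protocol.

      #include <stdint.h>
      #include <stdio.h>

      #define FIFO_CAPACITY  256
      #define HIGH_WATERMARK 192
      #define LOW_WATERMARK   64
      #define CREDIT_CHUNK    32   /* N: fixed credit granted per N bytes drained */

      struct fifo_flow {
          uint32_t count;          /* bytes currently buffered in the FIFO */
          uint32_t drained;        /* bytes pulled since the last credit was sent */
          int      credit_mode;    /* set above the high watermark: credits gate the sender */
      };

      /* Bytes arriving from the link. */
      static void on_receive(struct fifo_flow *f, uint32_t bytes)
      {
          f->count += bytes;
          if (f->count > HIGH_WATERMARK)
              f->credit_mode = 1;  /* sender must now wait for explicit credits */
      }

      /* Bytes pulled by the consumer; returns the credit (in bytes) to send back
       * to the sender, or 0 if none is due yet. */
      static uint32_t on_drain(struct fifo_flow *f, uint32_t bytes)
      {
          f->count -= bytes;
          if (!f->credit_mode)
              return 0;            /* below the watermark: plain ready/not-ready signaling */
          f->drained += bytes;
          if (f->drained < CREDIT_CHUNK)
              return 0;
          f->drained -= CREDIT_CHUNK;
          if (f->count < LOW_WATERMARK)
              f->credit_mode = 0;  /* enough room again: fall back to watermark signaling */
          return CREDIT_CHUNK;     /* fixed credit, as in the first embodiment */
      }

      int main(void)
      {
          struct fifo_flow f = { 0, 0, 0 };
          on_receive(&f, 200);                              /* crosses the high watermark */
          printf("credit sent: %u\n", (unsigned)on_drain(&f, 40));
          return 0;
      }

    The second embodiment would instead size each credit from the running difference between bytes received and bytes pulled, rather than returning the fixed CREDIT_CHUNK.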