Patents by Inventor Gregg Bouchard

Gregg Bouchard has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150121395
    Abstract: A method, and corresponding apparatus, of managing processing thread migrations within a plurality of memory clusters, includes embedding, in memory components of the plurality of memory clusters, instructions indicative of processing thread migrations; storing, in one or more memory components of a particular memory cluster among the plurality of memory clusters, data configured to designate the particular memory cluster as a sink memory cluster, the sink memory cluster preventing an incoming migrated processing thread from migrating out of the sink memory cluster; and processing one or more processing threads, in one or more of the plurality of memory clusters, in accordance with at least one of the embedded migration instructions and the data stored in the one or more memory components of the sink memory cluster.
    Type: Application
    Filed: January 8, 2015
    Publication date: April 30, 2015
    Inventors: Najeeb I. Ansari, Gregg A. Bouchard, Rajan Goyal, Jeffrey A. Pangborn, Satyanarayana Lakshmipathi Billa
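    Illustrative sketch: the abstract above describes processing threads that follow migration instructions embedded in memory clusters until they reach a cluster designated as a sink, which absorbs them. The C++ model below is a minimal software analogy of that behavior; the types, fields, and cluster layout are invented for illustration and are not taken from the patent.
    ```cpp
    // Hypothetical sketch (not the patented hardware): a processing thread walks
    // across memory clusters, following migration instructions embedded alongside
    // the data; a cluster flagged as a "sink" absorbs the thread so it never
    // migrates out again.
    #include <array>
    #include <cstdio>
    #include <optional>

    struct MemoryCluster {
        int id;
        bool is_sink;                      // designated sink: absorbs migrated threads
        std::optional<int> migrate_to;     // embedded migration instruction, if any
    };

    int run_thread(const std::array<MemoryCluster, 4>& clusters, int start) {
        int current = start;
        while (true) {
            const MemoryCluster& c = clusters[current];
            // ... process the portion of the thread's work resident in this cluster ...
            if (c.is_sink || !c.migrate_to) {
                return current;            // sink (or no instruction): the thread stays here
            }
            std::printf("thread migrates: cluster %d -> %d\n", current, *c.migrate_to);
            current = *c.migrate_to;       // follow the embedded migration instruction
        }
    }

    int main() {
        // Cluster 3 is designated as the sink; incoming migrated threads end there.
        std::array<MemoryCluster, 4> clusters = {{
            {0, false, 1}, {1, false, 3}, {2, false, 3}, {3, true, std::nullopt}
        }};
        std::printf("thread finished in cluster %d\n", run_thread(clusters, 0));
    }
    ```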
  • Patent number: 8995449
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found.
    Type: Grant
    Filed: May 23, 2013
    Date of Patent: March 31, 2015
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Gregg A. Bouchard
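    Illustrative sketch: the abstract above describes a lookup engine that receives a key request for a packet, narrows the stored rules to a subset, and returns a response indicating whether a match was found. The C++ model below is a minimal software analogy of that flow; the bucket scheme, type names, and matching criterion are invented for illustration and are not taken from the patent.
    ```cpp
    // Illustrative software model (not the patented hardware): a lookup engine
    // receives a key request, narrows the stored rules to the candidate subset in
    // the key's bucket, and reports whether any candidate matches.
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Rule {
        uint32_t bucket;        // which on-chip memory bucket the rule lives in
        uint32_t match_value;   // value the key must equal for a match
        int rule_id;
    };

    struct Response {
        bool match_found;
        int rule_id;            // valid only when match_found is true
    };

    class LookupEngine {
    public:
        explicit LookupEngine(std::vector<Rule> rules) : rules_(std::move(rules)) {}

        // Handle a key request: select the subset of rules in the key's bucket,
        // then match the key against only that subset.
        Response handle_key_request(uint32_t key) const {
            uint32_t bucket = key % kNumBuckets;          // simple subset selection
            for (const Rule& r : rules_) {
                if (r.bucket == bucket && r.match_value == key) {
                    return {true, r.rule_id};
                }
            }
            return {false, -1};
        }

    private:
        static constexpr uint32_t kNumBuckets = 16;
        std::vector<Rule> rules_;
    };

    int main() {
        LookupEngine engine({{5, 21, 100}, {5, 37, 101}, {7, 7, 102}});
        Response resp = engine.handle_key_request(21);
        return resp.match_found ? 0 : 1;   // exit code 0 when a rule matched
    }
    ```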
  • Publication number: 20150067123
    Abstract: An engine architecture for processing finite automata includes a hyper non-deterministic automata (HNA) processor specialized for non-deterministic finite automata (NFA) processing. The HNA processor includes a plurality of super-clusters and an HNA scheduler. Each super-cluster includes a plurality of clusters. Each cluster of the plurality of clusters includes a plurality of HNA processing units (HPUs). A corresponding plurality of HPUs of a corresponding plurality of clusters of at least one selected super-cluster is available as a resource pool of HPUs to the HNA scheduler for assignment of at least one HNA instruction to enable acceleration of a match of at least one regular expression pattern in an input stream received from a network.
    Type: Application
    Filed: July 8, 2014
    Publication date: March 5, 2015
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Yossef Shanava, Gregg A. Bouchard, Timothy Toshio Nakada
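    Illustrative sketch: the abstract above describes an HNA scheduler that treats the HPUs across the clusters of a selected super-cluster as one resource pool and assigns HNA instructions to them. The C++ model below sketches only that pooling and assignment step; the data structures and dispatch policy are invented for illustration and are not taken from the patent.
    ```cpp
    // Hypothetical sketch of the scheduling idea only: HPUs drawn from the clusters
    // of a selected super-cluster form one resource pool, and the scheduler assigns
    // queued HNA (NFA-matching) instructions to whichever pooled HPU is free.
    #include <cstdio>
    #include <deque>
    #include <utility>
    #include <vector>

    struct Hpu { int super_cluster; int cluster; int id; bool busy = false; };

    struct HnaInstruction { int id; };    // stands in for one NFA-match work item

    class HnaScheduler {
    public:
        explicit HnaScheduler(std::vector<Hpu> pool) : pool_(std::move(pool)) {}

        void submit(HnaInstruction instr) { queue_.push_back(instr); }

        // Assign pending instructions to free HPUs anywhere in the pool.
        void dispatch() {
            for (Hpu& hpu : pool_) {
                if (queue_.empty()) break;
                if (!hpu.busy) {
                    HnaInstruction instr = queue_.front();
                    queue_.pop_front();
                    hpu.busy = true;
                    std::printf("instruction %d -> super-cluster %d, cluster %d, HPU %d\n",
                                instr.id, hpu.super_cluster, hpu.cluster, hpu.id);
                }
            }
        }

    private:
        std::vector<Hpu> pool_;                 // pooled HPUs of the selected super-cluster
        std::deque<HnaInstruction> queue_;      // pending HNA instructions
    };

    int main() {
        HnaScheduler sched({{0, 0, 0}, {0, 0, 1}, {0, 1, 0}, {0, 1, 1}});
        for (int i = 0; i < 3; ++i) sched.submit({i});
        sched.dispatch();
    }
    ```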
  • Patent number: 8966152
    Abstract: According to an example embodiment, a processor is provided including an integrated on-chip memory device component. The on-chip memory device component includes a plurality of memory banks, and multiple logical ports, each logical port coupled to one or more of the plurality of memory banks, enabling access to multiple memory banks, among the plurality of memory banks, per clock cycle, each memory bank accessible by a single logical port per clock cycle and each logical port accessing a single memory bank per clock cycle.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: February 24, 2015
    Assignee: Cavium, Inc.
    Inventors: Gregg A. Bouchard, Rajan Goyal, Jeffrey A. Pangborn, Najeeb I. Ansari
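    Illustrative sketch: the abstract above describes an on-chip memory in which several logical ports share several banks, with each bank accessible by a single port per clock cycle and each port accessing a single bank per cycle. The C++ model below is a toy arbiter enforcing exactly that constraint; the fixed-priority grant policy and the bank/port counts are invented for illustration and are not taken from the patent.
    ```cpp
    // Toy model (not the actual hardware) of the access constraint described above:
    // several logical ports share several memory banks, but in any one clock cycle
    // each port may touch at most one bank and each bank may serve at most one port.
    #include <cstdio>
    #include <optional>
    #include <vector>

    constexpr int kBanks = 4;
    constexpr int kPorts = 2;

    // One cycle of arbitration: each port requests a bank; a request is granted
    // only if no lower-numbered port has already claimed that bank this cycle.
    std::vector<std::optional<int>> arbitrate(const std::vector<int>& requested_bank) {
        std::vector<bool> bank_taken(kBanks, false);
        std::vector<std::optional<int>> granted(kPorts);
        for (int port = 0; port < kPorts; ++port) {
            int bank = requested_bank[port];
            if (!bank_taken[bank]) {
                bank_taken[bank] = true;
                granted[port] = bank;            // this port accesses exactly this bank
            }                                    // otherwise the port stalls this cycle
        }
        return granted;
    }

    int main() {
        // Ports 0 and 1 hit different banks: both are served in the same cycle.
        auto cycle1 = arbitrate({0, 3});
        // Both ports target bank 2: only port 0 is granted; port 1 must retry.
        auto cycle2 = arbitrate({2, 2});
        for (int p = 0; p < kPorts; ++p)
            std::printf("cycle2 port %d: %s\n", p, cycle2[p] ? "granted" : "stalled");
        (void)cycle1;
    }
    ```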
  • Patent number: 8954700
    Abstract: A method, and corresponding apparatus, of managing processing thread migrations within a plurality of memory clusters, includes embedding, in memory components of the plurality of memory clusters, instructions indicative of processing thread migrations; storing, in one or more memory components of a particular memory cluster among the plurality of memory clusters, data configured to designate the particular memory cluster as a sink memory cluster, the sink memory cluster preventing an incoming migrated processing thread from migrating out of the sink memory cluster; and processing one or more processing threads, in one or more of the plurality of memory clusters, in accordance with at least one of the embedded migration instructions and the data stored in the one or more memory components of the sink memory cluster.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: February 10, 2015
    Assignee: Cavium, Inc.
    Inventors: Najeeb I. Ansari, Gregg A. Bouchard, Rajan Goyal, Jeffrey A. Pangborn, Satyanarayana Lakshmipathi Billa
  • Patent number: 8923306
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. Based on a prefetch status, a selection of the subset of rules are retrieved for rule matching. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: December 30, 2014
    Assignee: Cavium, Inc.
    Inventors: Gregg A. Bouchard, Rajan Goyal
  • Publication number: 20140372709
    Abstract: In one embodiment, a system comprises a memory and a memory controller that provides a cache access path to the memory and a bypass-cache access path to the memory, receives requests to read graph data from the memory on the bypass-cache access path and receives requests to read non-graph data from the memory on the cache access path. A method comprises receiving a request at a memory controller to read graph data from a memory on a bypass-cache access path, receiving a request at the memory controller to read non-graph data from the memory through a cache access path, and arbitrating, in the memory controller, among the requests.
    Type: Application
    Filed: August 22, 2014
    Publication date: December 18, 2014
    Inventors: Jeffrey A. Pangborn, Gregg A. Bouchard, Rajan Goyal, Richard E. Kessler, Aseem Maheshwari
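    Illustrative sketch: the abstract above describes a memory controller that serves non-graph reads through a cache access path and graph reads through a bypass-cache path, arbitrating between the two request streams. The C++ model below is a minimal software analogy; the alternating arbitration policy and the interfaces are invented for illustration and are not taken from the patent.
    ```cpp
    // Minimal sketch of the two access paths described above (interfaces are
    // invented for illustration): graph-data reads bypass the cache, other reads
    // go through it, and the controller arbitrates between the two request streams.
    #include <cstdint>
    #include <cstdio>
    #include <deque>
    #include <unordered_map>

    struct ReadRequest { uint64_t addr; bool is_graph_data; };

    class MemoryController {
    public:
        void submit(ReadRequest req) {
            (req.is_graph_data ? bypass_queue_ : cache_queue_).push_back(req);
        }

        // Arbitrate one request per call, alternating between the two paths so
        // neither stream starves the other.
        void service_one() {
            std::deque<ReadRequest>& primary   = prefer_bypass_ ? bypass_queue_ : cache_queue_;
            std::deque<ReadRequest>& secondary = prefer_bypass_ ? cache_queue_  : bypass_queue_;
            std::deque<ReadRequest>& q = !primary.empty() ? primary : secondary;
            if (q.empty()) return;
            ReadRequest req = q.front();
            q.pop_front();
            if (req.is_graph_data) {
                read_dram(req.addr);                       // bypass path: straight to memory
            } else if (!cache_.count(req.addr)) {
                cache_[req.addr] = read_dram(req.addr);    // cache path: fill on miss
            }
            prefer_bypass_ = !prefer_bypass_;
        }

    private:
        static uint32_t read_dram(uint64_t addr) {
            std::printf("DRAM read at 0x%llx\n", (unsigned long long)addr);
            return 0;
        }
        std::deque<ReadRequest> bypass_queue_, cache_queue_;
        std::unordered_map<uint64_t, uint32_t> cache_;
        bool prefer_bypass_ = true;
    };

    int main() {
        MemoryController mc;
        mc.submit({0x1000, true});    // graph data: bypass-cache path
        mc.submit({0x2000, false});   // non-graph data: cache path
        mc.service_one();
        mc.service_one();
    }
    ```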
  • Publication number: 20140337387
    Abstract: An improved content search mechanism uses a graph that includes intelligent nodes, avoiding the overhead of post processing and improving the overall performance of a content processing application. An intelligent node is similar to a node in a DFA graph but includes a command. The command in the intelligent node allows additional state for the node to be generated and checked. This additional state allows the content search mechanism to traverse the same node with two different interpretations. By generating state for the node, the graph of nodes does not become exponential. It also allows a user function to be called upon reaching a node, which can perform any desired user tasks, including modifying the input data or position.
    Type: Application
    Filed: July 22, 2014
    Publication date: November 13, 2014
    Inventors: Muhammad R. Hussain, David A. Carlson, Gregg A. Bouchard, Trent Parker
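    Illustrative sketch: the abstract above describes DFA-like graph nodes that carry a command, letting the walker generate and check extra per-node state (or call a user function) without expanding the graph. The C++ model below sketches that idea; the node layout, command form, and example graph are invented for illustration and are not taken from the patent.
    ```cpp
    // Conceptual sketch only (node layout and command set are invented): a DFA-like
    // graph node carries an optional command that updates per-node state, so the
    // walker can pass through the same node under two different interpretations
    // without expanding the graph.
    #include <cstddef>
    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    struct WalkerState {
        std::size_t pos = 0;                      // position in the input
        std::map<int, int> node_state;            // extra state generated per node
    };

    struct Node {
        int id;
        std::map<char, int> transitions;          // ordinary DFA transitions
        // Optional command run when the node is reached; it may inspect or modify
        // the walker's state (or, in the patent's terms, call a user function).
        std::function<void(WalkerState&)> command;
    };

    WalkerState walk(const std::vector<Node>& graph, const std::string& input) {
        WalkerState st;
        int current = 0;
        for (char c : input) {
            const Node& n = graph[current];
            if (n.command) n.command(st);             // intelligent node: run its command
            auto it = n.transitions.find(c);
            if (it == n.transitions.end()) break;     // no transition for this character
            current = it->second;
            ++st.pos;
        }
        return st;
    }

    int main() {
        // Node 1 is an "intelligent" node: its command counts how many times the
        // walker reaches it, extra state a plain DFA node could not record.
        std::vector<Node> graph(2);
        graph[0] = {0, {{'a', 1}}, nullptr};
        graph[1] = {1, {{'a', 1}, {'b', 0}}, [](WalkerState& st) { ++st.node_state[1]; }};
        WalkerState st = walk(graph, "aaab");
        std::printf("consumed %zu characters; node 1 reached %d times\n",
                    st.pos, st.node_state[1]);
    }
    ```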
  • Publication number: 20140317353
    Abstract: A network services processor includes an input/output bridge that avoids unnecessary updates to memory when cache blocks storing processed packet data are no longer required. The input/output bridge monitors requests to free buffers in memory received from cores and IO units in the network services processor. Instead of writing the cache block back to the buffer in memory that will be freed, the input/output bridge issues "don't write back" commands to a cache controller to clear the dirty bit for the selected cache block, thus avoiding wasteful write-backs from cache to memory. After the dirty bit is cleared, the buffer in memory is freed, that is, made available for allocation to store data for another packet.
    Type: Application
    Filed: January 20, 2014
    Publication date: October 23, 2014
    Applicant: Cavium, Inc.
    Inventors: David H. Asher, Gregg A. Bouchard, Richard E. Kessler, Robert A. Sanzone
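    Illustrative sketch: the abstract above describes clearing the dirty bit of a cache block whose buffer is about to be freed, so the block is never wastefully written back to memory. The C++ model below is a software analogy of that bookkeeping; the cache-controller interface is invented for illustration and is not taken from the patent.
    ```cpp
    // Software analogy (the real mechanism sits in the I/O bridge and cache
    // controller): when a packet buffer is about to be freed, issuing a
    // "don't write back" command simply clears the dirty bit on its cache block,
    // so no useless write-back of dead data reaches memory.
    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    struct CacheBlock { uint64_t addr; bool dirty; };

    class CacheController {
    public:
        void mark_dirty(uint64_t addr) { blocks_[addr] = {addr, true}; }

        // Normal eviction: dirty blocks must be written back to memory.
        void evict(uint64_t addr) {
            auto it = blocks_.find(addr);
            if (it == blocks_.end()) return;
            if (it->second.dirty)
                std::printf("write-back of block 0x%llx\n", (unsigned long long)addr);
            blocks_.erase(it);
        }

        // "Don't write back": clear the dirty bit because the buffer is being
        // freed and its contents will never be read again.
        void dont_write_back(uint64_t addr) {
            auto it = blocks_.find(addr);
            if (it != blocks_.end()) it->second.dirty = false;
        }

    private:
        std::unordered_map<uint64_t, CacheBlock> blocks_;
    };

    int main() {
        CacheController cc;
        cc.mark_dirty(0x4000);        // processed packet data sits dirty in cache
        cc.dont_write_back(0x4000);   // buffer is about to be freed: drop the dirty bit
        cc.evict(0x4000);             // eviction now skips the wasteful write-back
    }
    ```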
  • Patent number: 8850125
    Abstract: In one embodiment, a system comprises a memory and a memory controller that provides a cache access path to the memory and a bypass-cache access path to the memory, receives requests to read graph data from the memory on the bypass-cache access path and receives requests to read non-graph data from the memory on the cache access path. A method comprises receiving a request at a memory controller to read graph data from a memory on a bypass-cache access path, receiving a request at the memory controller to read non-graph data from the memory through a cache access path, and arbitrating, in the memory controller, among the requests.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: September 30, 2014
    Assignee: Cavium, Inc.
    Inventors: Jeffrey Pangborn, Gregg A. Bouchard, Rajan Goyal, Richard E. Kessler, Aseem Maheshwari
  • Patent number: 8850101
    Abstract: In one embodiment, a system comprises a plurality of memory ports. The memory ports are distributed into a plurality of subsets, where each subset is identified by a subset index. The system further comprises a first address hashing unit configured to receive a request including at least one virtual memory address. Each virtual memory address is associated with a replication factor, and the virtual memory address refers to graph data. The first address hashing unit translates the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address to a hardware based memory address. The hardware based address refers to data in the memory ports within a subset indicated by the corresponding subset index.
    Type: Grant
    Filed: September 11, 2013
    Date of Patent: September 30, 2014
    Assignee: Cavium, Inc.
    Inventors: Jeffrey A. Pangborn, Gregg A. Bouchard, Rajan Goyal, Richard E. Kessler
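    Illustrative sketch: the abstract above describes an address hashing unit that uses a virtual memory address and its replication factor to pick a subset of memory ports and to form a hardware address within that subset. The C++ model below shows one plausible bit-level arithmetic for such a translation; the actual hashing is hardware-specific, and this scheme is invented purely for illustration.
    ```cpp
    // Illustrative arithmetic only (not the actual hardware hashing): data
    // replicated with factor r can live in any of 2^r subsets of memory ports, so
    // a few virtual-address bits pick one subset, spreading reads across ports,
    // while the remaining bits form the hardware address inside that subset.
    #include <cstdint>
    #include <cstdio>

    struct HardwareAddress {
        uint32_t subset_index;   // which subset of memory ports holds this copy
        uint64_t offset;         // address within that subset's ports
    };

    // replication_factor r means the data is replicated across 2^r subsets.
    HardwareAddress translate(uint64_t virtual_addr, unsigned replication_factor) {
        uint32_t num_subsets = 1u << replication_factor;
        // Low-order bits of the virtual address choose among the eligible subsets,
        // so successive accesses to replicated data land on different ports.
        uint32_t subset_index = static_cast<uint32_t>(virtual_addr) & (num_subsets - 1);
        uint64_t offset = virtual_addr >> replication_factor;
        return {subset_index, offset};
    }

    int main() {
        for (uint64_t va : {0x1000ULL, 0x1001ULL, 0x1002ULL, 0x1003ULL}) {
            HardwareAddress hw = translate(va, 2);   // replicated across 4 subsets
            std::printf("va 0x%llx -> subset %u, offset 0x%llx\n",
                        (unsigned long long)va, hw.subset_index,
                        (unsigned long long)hw.offset);
        }
    }
    ```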
  • Publication number: 20140279806
    Abstract: According to at least one example embodiment, a method and a corresponding accumulator scoreboard for managing bundles of rule matching threads processed by one or more rule matching engines comprise: recording, for each rule matching thread in a given bundle of rule matching threads, a rule matching result in association with a priority corresponding to the respective rule matching thread; determining a final rule matching result, for the given bundle of rule matching threads, based at least in part on the corresponding indications of priorities; and generating a response state indicative of the determined final rule matching result for reporting to a host processor or a requesting processing engine.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Cavium, Inc.
    Inventors: Najeeb I. Ansari, Gregg A. Bouchard, Rajan Goyal, Jeffrey A. Pangborn
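    Illustrative sketch: the abstract above describes an accumulator scoreboard that records each rule matching thread's result together with its priority and, once the bundle is complete, derives the final result to report. The C++ model below is a minimal software analogy; the priority convention (lower value wins) and the interfaces are invented for illustration and are not taken from the patent.
    ```cpp
    // Hypothetical model of the bookkeeping described above: each thread in a
    // bundle reports its match result with its priority; once all results are in,
    // the scoreboard picks the highest-priority match as the bundle's final answer.
    #include <cstddef>
    #include <cstdio>
    #include <optional>
    #include <vector>

    struct ThreadResult {
        bool matched;
        int rule_id;
        unsigned priority;       // lower value = higher priority in this sketch
    };

    class AccumulatorScoreboard {
    public:
        explicit AccumulatorScoreboard(std::size_t bundle_size) : expected_(bundle_size) {}

        // Record one rule-matching thread's result together with its priority.
        void record(ThreadResult result) { results_.push_back(result); }

        // Once every thread in the bundle has reported, produce the final result.
        std::optional<ThreadResult> final_result() const {
            if (results_.size() < expected_) return std::nullopt;   // bundle incomplete
            std::optional<ThreadResult> best;
            for (const ThreadResult& r : results_) {
                if (r.matched && (!best || r.priority < best->priority)) best = r;
            }
            if (best) return best;
            return ThreadResult{false, -1, 0};    // bundle finished with no match
        }

    private:
        std::size_t expected_;
        std::vector<ThreadResult> results_;
    };

    int main() {
        AccumulatorScoreboard board(3);
        board.record({true, 42, 2});
        board.record({false, -1, 0});
        board.record({true, 17, 1});            // higher-priority match wins
        if (auto final = board.final_result(); final && final->matched)
            std::printf("final match: rule %d\n", final->rule_id);
    }
    ```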
  • Publication number: 20140269718
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. A lookup front-end receives lookup requests from a host, and processes these lookup requests to generate key requests for forwarding to the lookup engines. Based on information in the packet, the lookup front-end can optimize start times for sending key requests as a continuous stream with minimal delay. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Rajan Goyal, Gregg A. Bouchard, Karen A. Szypulski, Charles D. Spackman
  • Publication number: 20140279805
    Abstract: In a network search processor, configured to handle search requests in a router, a scheduler for scheduling rule matching threads initiated by a plurality of initiating engines is designed to make efficient use of the resources in the network search processor while providing high speed performance. According to at least one example embodiment, the scheduler and a corresponding scheduling method comprise: determining a set of bundles of rule matching threads, each bundle being initiated by a separate initiating engine; distributing rule matching threads in each bundle into a number of subgroups of rule matching threads; assigning the subgroups of rule matching threads associated with each bundle of the set of bundles to multiple scheduling queues; and sending rule matching threads, assigned to each scheduling queue, to rule matching engines according to an order based on priorities associated with the respective bundles of rule matching threads.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Cavium, Inc.
    Inventors: Jeffrey A. Pangborn, Najeeb I. Ansari, Gregg A. Bouchard, Rajan Goyal
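    Illustrative sketch: the abstract above describes a scheduler that splits each bundle of rule matching threads into subgroups, spreads the subgroups across scheduling queues, and dispatches from each queue according to bundle priority. The C++ model below sketches that distribution step; the round-robin split and the priority convention are invented for illustration and are not taken from the patent.
    ```cpp
    // Sketch of the distribution step only (data structures are invented): each
    // bundle's threads are split round-robin into subgroups, the subgroups are
    // spread across the scheduling queues, and each queue drains its threads in
    // order of the owning bundle's priority.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Thread { int bundle_id; int thread_id; unsigned bundle_priority; };

    constexpr int kNumQueues = 4;

    // Distribute every bundle's threads across the scheduling queues.
    std::vector<std::vector<Thread>> schedule(const std::vector<std::vector<Thread>>& bundles) {
        std::vector<std::vector<Thread>> queues(kNumQueues);
        for (const auto& bundle : bundles) {
            for (std::size_t i = 0; i < bundle.size(); ++i) {
                queues[i % kNumQueues].push_back(bundle[i]);   // subgroup i -> queue i mod N
            }
        }
        // Within each queue, dispatch order follows the bundles' priorities
        // (lower value = dispatched first in this sketch).
        for (auto& q : queues) {
            std::stable_sort(q.begin(), q.end(), [](const Thread& a, const Thread& b) {
                return a.bundle_priority < b.bundle_priority;
            });
        }
        return queues;
    }

    int main() {
        std::vector<std::vector<Thread>> bundles = {
            {{0, 0, 2}, {0, 1, 2}, {0, 2, 2}},                // bundle 0, priority 2
            {{1, 0, 1}, {1, 1, 1}},                           // bundle 1, priority 1
        };
        auto queues = schedule(bundles);
        for (int q = 0; q < kNumQueues; ++q)
            for (const Thread& t : queues[q])
                std::printf("queue %d -> bundle %d thread %d\n", q, t.bundle_id, t.thread_id);
    }
    ```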
  • Patent number: 8818921
    Abstract: An improved content search mechanism uses a graph that includes intelligent nodes, avoiding the overhead of post processing and improving the overall performance of a content processing application. An intelligent node is similar to a node in a DFA graph but includes a command. The command in the intelligent node allows additional state for the node to be generated and checked. This additional state allows the content search mechanism to traverse the same node with two different interpretations. By generating state for the node, the graph of nodes does not become exponential. It also allows a user function to be called upon reaching a node, which can perform any desired user tasks, including modifying the input data or position.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: August 26, 2014
    Assignee: Cavium, Inc.
    Inventors: Muhammad R. Hussain, David A. Carlson, Gregg A. Bouchard, Trent Parker
  • Publication number: 20140215478
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. A work product may be migrated between lookup engines to complete the rule matching process. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found.
    Type: Application
    Filed: March 31, 2014
    Publication date: July 31, 2014
    Applicant: Cavium, Inc.
    Inventors: Rajan Goyal, Gregg A. Bouchard
  • Patent number: 8782287
    Abstract: A packet processing system comprises first processing circuitry for performing a first function, and first memory circuitry coupled to the first processing circuitry for storing received packets, wherein at least a portion of the packets stored by the first memory circuitry are usable by the first processing circuitry in accordance with the first function. The packet processing system further comprises at least second processing circuitry for performing a second function, and at least second memory circuitry coupled to the second processing circuitry for storing at least a portion of the same packets stored in the first memory circuitry, wherein at least a portion of the packets stored in the second memory circuitry are usable by the second processing circuitry in accordance with the second function. In an illustrative embodiment, the first processing circuitry and the second processing circuitry operate in a packet switching device such as a router.
    Type: Grant
    Filed: December 21, 2001
    Date of Patent: July 15, 2014
    Assignee: Agere Systems LLC
    Inventors: Gregg A. Bouchard, Mauricio Calle, Joel R. Davidson, Michael W. Hathaway, James T. Kirk, Christopher Brian Walton
  • Publication number: 20140188973
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. A lookup front-end receives lookup requests from a host, and processes these lookup requests to generate key requests for forwarding to the lookup engines. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found. The lookup front-end further processes the response message and provides a corresponding response to the host.
    Type: Application
    Filed: November 18, 2013
    Publication date: July 3, 2014
    Applicant: Cavium, Inc.
    Inventors: Rajan Goyal, Gregg A. Bouchard, Jeffrey R. Hardesty, Troy S. Dahlmann, Karen A. Szypulski
  • Patent number: 8719331
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. A work product may be migrated between lookup engines to complete the rule matching process. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: May 6, 2014
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Gregg A. Bouchard
  • Publication number: 20140119378
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. A lookup front-end receives lookup requests from a host, and processes these lookup requests to generate key requests for forwarding to the lookup engines. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found. The lookup front-end further processes the response message and provides a corresponding response to the host.
    Type: Application
    Filed: December 23, 2013
    Publication date: May 1, 2014
    Applicant: Cavium, Inc.
    Inventors: Rajan Goyal, Gregg A. Bouchard