Patents by Inventor Steven W. Zagorianakos

Steven W. Zagorianakos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10911038
    Abstract: A network flow processor integrated circuit includes a plurality of processors, a plurality of multi-threaded transactional memories (MTMs), and a configurable mesh posted transaction data bus. The configurable mesh posted transaction data bus includes a configurable command mesh and a configurable data mesh. Each of these configurable meshes includes crossbar switches and interconnecting links. A command bus transaction value issued by a processor can pass across the command mesh to an MTM. The command bus transaction value includes a reference value. The MTM uses the reference value to pull data across the configurable data mesh into the MTM. The MTM then uses the data to carry out the commanded transactional memory operation. Multiple such commands can pass across the posted transaction bus, through different parts of the integrated circuit, at the same time, and a single MTM can be carrying out multiple such operations at the same time.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: February 2, 2021
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos, Ronald N. Fortino
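    A rough way to picture the command-then-pull exchange described in this abstract: the processor posts a command carrying a reference value, and the MTM later uses that reference to pull the operand data across the data mesh before executing the operation. The C sketch below is only an illustrative software model of that exchange; the structure and function names (cmd_transaction, mtm_handle_command, and so on) are hypothetical and not taken from the patent.
    ```c
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical model of a posted command bus transaction value.
     * The reference value identifies where the operand data can be
     * pulled from across the data mesh. */
    struct cmd_transaction {
        uint8_t  target_mtm;   /* which transactional memory is addressed */
        uint8_t  opcode;       /* which transactional memory operation to run */
        uint32_t reference;    /* token the MTM uses to pull the data */
    };

    /* Stand-in for the processor-side data that the reference points at. */
    static const char source_data[16] = "payload-bytes";

    /* Model of the MTM pulling data across the data mesh using the
     * reference value, then carrying out the commanded operation. */
    static void mtm_handle_command(const struct cmd_transaction *cmd)
    {
        char pulled[16];

        /* "Pull" phase: fetch the operand identified by cmd->reference. */
        memcpy(pulled, source_data, sizeof(pulled));

        /* "Execute" phase: run the transactional memory operation. */
        printf("MTM %u: op %u on data for reference 0x%08x: %s\n",
               cmd->target_mtm, cmd->opcode, cmd->reference, pulled);
    }

    int main(void)
    {
        /* The processor posts the command and moves on; it does not wait
         * for the data transfer, which is what makes the bus "posted". */
        struct cmd_transaction cmd = { .target_mtm = 3, .opcode = 1,
                                       .reference = 0xA5A5A5A5 };
        mtm_handle_command(&cmd);
        return 0;
    }
    ```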
  • Patent number: 10362093
    Abstract: Multiple processors share access, via a bus, to a pipelined NFA engine. The NFA engine can implement an NFA of the type that is not a DFA (namely, it can be in multiple states at the same time). One of the processors communicates a configuration command, a go command, and an event generate command across the bus to the NFA engine. The event generate command includes a reference value. The configuration command causes the NFA engine to be configured. The go command causes the configured NFA engine to perform a particular NFA operation. Upon completion of the NFA operation, the event generate command causes the reference value to be returned back across the bus to the processor.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: July 23, 2019
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos
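    The command flow in this abstract (configure, go, then an event that echoes a reference value back when the NFA operation finishes) can be pictured with a small software model. The C sketch below is a hypothetical illustration only; the names nfa_engine, nfa_configure, and so on are invented for this example and are not the engine's actual interface.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical software model of the shared, pipelined NFA engine. */
    struct nfa_engine {
        int      configured;
        int      running;
        uint32_t pending_event_ref;  /* reference value to return on completion */
    };

    static void nfa_configure(struct nfa_engine *e)  { e->configured = 1; }
    static void nfa_go(struct nfa_engine *e)          { e->running = 1; }
    static void nfa_event_generate(struct nfa_engine *e, uint32_t ref)
    {
        e->pending_event_ref = ref;  /* remembered until the operation completes */
    }

    /* Completion: the engine returns the stored reference value across the
     * bus so the requesting processor can match the event to its request. */
    static uint32_t nfa_complete(struct nfa_engine *e)
    {
        e->running = 0;
        return e->pending_event_ref;
    }

    int main(void)
    {
        struct nfa_engine engine = {0};

        nfa_configure(&engine);                 /* configuration command */
        nfa_go(&engine);                        /* go command: start the NFA op */
        nfa_event_generate(&engine, 0x1234);    /* event generate command + ref */

        printf("event returned with reference 0x%04x\n", nfa_complete(&engine));
        return 0;
    }
    ```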
  • Patent number: 10031758
    Abstract: A dispatcher circuit receives sets of instructions from an instructing entity. Instructions of the set of a first type are put into a first queue circuit, instructions of the set of a second type are put into a second queue circuit, and so forth. The first queue circuit dispatches instructions of the first type to one or more processing engines and records when the instructions of the set are completed. When all the instructions of the set of the first type have been completed, the first queue circuit sends the second queue circuit a go signal, which causes the second queue circuit to dispatch instructions of the second type and to record when they have been completed. This process proceeds from queue circuit to queue circuit. When all the instructions of the set have been completed, the dispatcher circuit returns an “instructions done” indication to the original instructing entity.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: July 24, 2018
    Assignee: Netronome Systems, Inc.
    Inventors: Frank J. Zappulla, Steven W. Zagorianakos, Rajesh Vaidheeswarran
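    One way to read this abstract is as a chain of per-type queues, where each queue only starts dispatching after the previous queue reports that all of its instructions have completed, and the dispatcher reports "instructions done" after the last queue drains. The C sketch below is a simplified, hypothetical software model of that sequencing, not the circuit itself.
    ```c
    #include <stdio.h>

    #define NUM_TYPES 3
    #define MAX_PER_TYPE 4

    /* Hypothetical model: one queue of instructions per instruction type. */
    struct type_queue {
        int count;
        int instr[MAX_PER_TYPE];
    };

    /* "Dispatch" every instruction of one type and record its completion.
     * In hardware this is done by processing engines; here it is a loop. */
    static void run_queue(int type, const struct type_queue *q)
    {
        for (int i = 0; i < q->count; i++)
            printf("type %d: instruction %d completed\n", type, q->instr[i]);
    }

    int main(void)
    {
        struct type_queue queues[NUM_TYPES] = {
            { 2, { 10, 11 } },     /* instructions of the first type  */
            { 1, { 20 } },         /* instructions of the second type */
            { 2, { 30, 31 } },     /* instructions of the third type  */
        };

        /* Each queue receives a "go" only after the previous queue has
         * recorded completion of all of its instructions. */
        for (int type = 0; type < NUM_TYPES; type++) {
            run_queue(type, &queues[type]);
            if (type + 1 < NUM_TYPES)
                printf("queue %d empty -> go signal to queue %d\n", type, type + 1);
        }

        printf("dispatcher: instructions done\n");
        return 0;
    }
    ```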
  • Patent number: 9971720
    Abstract: An island-based integrated circuit includes a configurable mesh data bus. The data bus includes four meshes. Each mesh includes, for each island, a crossbar switch and radiating half links. The half links of adjacent islands align to form links between crossbar switches. A link is implemented as two distributed credit FIFOs. In one direction, a link portion involves a first FIFO associated with an output port of a first island, a first chain of registers, and a second FIFO associated with an input port of a second island. When a transaction value passes through the FIFO and through the crossbar switch of the second island, an arbiter in the crossbar switch returns a taken signal. The taken signal passes back through a second chain of registers to a credit count circuit in the first island. The credit count circuit maintains a credit count value for the distributed credit FIFO.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: May 15, 2018
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos, Ronald N. Fortino
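    The credit mechanism in this abstract can be summarized as follows: the sender spends a credit each time it launches a transaction value into the link, and it gets the credit back when the receiving crossbar's taken signal propagates back through the return register chain. The C sketch below is a hypothetical flow-control model under that reading; the names and the FIFO depth are made up.
    ```c
    #include <stdio.h>

    #define FIFO_DEPTH 4   /* hypothetical depth of the distributed credit FIFO */

    /* Sender-side credit counter for one link direction. */
    struct credit_counter {
        int credits;       /* how many more values may be in flight on the link */
    };

    /* Launch a transaction value only if a credit is available. */
    static int try_send(struct credit_counter *c, int value)
    {
        if (c->credits == 0)
            return 0;                       /* link full: hold the value back */
        c->credits--;                       /* one more value now in flight    */
        printf("sent %d (credits left: %d)\n", value, c->credits);
        return 1;
    }

    /* The arbiter in the far crossbar accepted a value; its "taken" signal
     * eventually arrives back and restores one credit. */
    static void taken_returned(struct credit_counter *c)
    {
        c->credits++;
        printf("taken returned (credits now: %d)\n", c->credits);
    }

    int main(void)
    {
        struct credit_counter link = { .credits = FIFO_DEPTH };

        for (int v = 0; v < 5; v++)
            if (!try_send(&link, v))
                printf("value %d stalled waiting for a credit\n", v);

        taken_returned(&link);              /* frees space for the stalled value */
        try_send(&link, 4);
        return 0;
    }
    ```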
  • Patent number: 9940097
    Abstract: A registered synchronous FIFO has a tail register, internal registers, and a head register. The FIFO cannot be pushed if it is full and cannot be popped if it is empty, but otherwise can be pushed and/or popped. Within the FIFO, the internal signal fanout of the incoming data circuitry and push control circuitry is minimized and remains essentially constant regardless of the number of registers of the FIFO. The output delay of the output data also is essentially constant regardless of the number of registers of the FIFO. An incoming data value can only be written into the head or tail. If a data value is in the tail and one of the internal registers is empty, and if no push or pop is to be performed in a clock cycle, then nevertheless the data value in the tail is moved into the empty internal register in the cycle.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: April 10, 2018
    Assignee: Netronome Systems, Inc.
    Inventors: Ronald N. Fortino, Gavin J. Stark, Steven W. Zagorianakos
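    The behavior in this abstract (pushes land only in the head or tail register, and on an otherwise idle cycle a value sitting in the tail migrates into an empty internal register) can be modeled with a tiny cycle-by-cycle simulation. The C sketch below is a hypothetical, much-simplified model with a single internal register, intended only to show the migration rule; it is not the patented circuit.
    ```c
    #include <stdio.h>

    /* Hypothetical one-internal-register model of the registered FIFO.
     * Each register holds a value plus a "valid" flag. */
    struct reg { int valid; int value; };

    struct fifo {
        struct reg tail;      /* pushes land here when the FIFO is not empty */
        struct reg internal;  /* one internal register (the patent allows many) */
        struct reg head;      /* pops are served from here */
    };

    /* One clock cycle with neither a push nor a pop: a value waiting in the
     * tail is still moved forward into an empty internal register. */
    static void idle_cycle(struct fifo *f)
    {
        if (f->tail.valid && !f->internal.valid) {
            f->internal = f->tail;
            f->tail.valid = 0;
            printf("idle cycle: tail value %d moved into internal register\n",
                   f->internal.value);
        }
    }

    int main(void)
    {
        struct fifo f = {0};

        /* Push into an empty FIFO: the value goes straight to the head. */
        f.head = (struct reg){ .valid = 1, .value = 7 };

        /* Next push: the head is occupied, so the value lands in the tail. */
        f.tail = (struct reg){ .valid = 1, .value = 8 };

        idle_cycle(&f);   /* 8 drains forward even though nothing was popped */
        return 0;
    }
    ```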
  • Patent number: 9804959
    Abstract: A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet arrives. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request processing. Once the packet becomes available, the header portion is automatically provided to the corresponding microengine for packet processing. Only one bus transaction is needed for the microengines to start packet processing. Second, the microengines can process packets before the entire packet is written into the memory. This is especially useful for large packets because the packets do not have to be written into the memory completely before being processed by the microengines.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: October 31, 2017
    Assignee: Netronome Systems, Inc.
    Inventors: Salma Mirza, Steven W. Zagorianakos, Gavin J. Stark
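    The "request before the packet arrives" idea in this abstract amounts to a work queue of waiting microengines: when a packet header becomes available, the packet engine hands it directly to the next queued microengine, so only the one original bus transaction is needed and processing can begin before the full packet has been written to memory. The C sketch below is a hypothetical queue model of that hand-off, with invented names.
    ```c
    #include <stdio.h>

    #define MAX_WAITERS 4

    /* Hypothetical work queue of microengines waiting for a packet header. */
    struct work_queue {
        int waiters[MAX_WAITERS];  /* microengine ids, in arrival order */
        int count;
    };

    /* A microengine registers itself before any packet has arrived. */
    static void request_work(struct work_queue *q, int microengine_id)
    {
        if (q->count < MAX_WAITERS)
            q->waiters[q->count++] = microengine_id;
    }

    /* A packet header arrives: hand it to the first waiting microengine.
     * The rest of the packet may still be streaming into memory. */
    static void header_available(struct work_queue *q, const char *header)
    {
        if (q->count == 0) {
            printf("no waiter: header held until a request arrives\n");
            return;
        }
        int me = q->waiters[0];
        for (int i = 1; i < q->count; i++)     /* pop the front of the queue */
            q->waiters[i - 1] = q->waiters[i];
        q->count--;
        printf("microengine %d starts processing header \"%s\"\n", me, header);
    }

    int main(void)
    {
        struct work_queue q = {0};
        request_work(&q, 0);                   /* requests posted up front */
        request_work(&q, 1);
        header_available(&q, "eth/ipv4/tcp");  /* packets arrive later */
        header_available(&q, "eth/ipv6/udp");
        return 0;
    }
    ```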
  • Patent number: 9729353
    Abstract: An NFA hardware engine includes a pipeline and a controller. The pipeline includes a plurality of stages, where one of the stages includes a transition table. Both a first automaton and a second automaton are encoded in the same transition table. The controller receives NFA engine commands onto the NFA engine and controls the pipeline in response to the NFA engine commands.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: August 8, 2017
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos
  • Patent number: 9703739
    Abstract: In response to receiving a novel “Return Available PPI Credits” command from a credit-aware device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the credit-aware device, and zeroes out its stored CTBR value. The credit-aware device adds the credits returned to a “Credits Available” value it maintains. The credit-aware device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another novel aspect, the credit-aware device is permitted to issue one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.
    Type: Grant
    Filed: January 6, 2015
    Date of Patent: July 11, 2017
    Assignee: Netronome Systems, Inc.
    Inventors: Salma Mirza, Gavin J. Stark, Steven W. Zagorianakos
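    The credit exchange in this abstract can be read as simple bookkeeping on both sides: the packet engine accumulates a "Credit To Be Returned" (CTBR) count per device, hands it over and zeroes it when it sees a "Return Available PPI Credits" command, and the credit-aware device adds the returned credits to its "Credits Available" count, which it checks before issuing PPI allocation requests (with one request still allowed at zero or negative credit). The C sketch below is a hypothetical model of that bookkeeping, not the actual command set.
    ```c
    #include <stdio.h>

    /* Hypothetical state kept by the packet engine for one CA device. */
    struct packet_engine { int ctbr; };       /* "Credit To Be Returned" */

    /* Hypothetical state kept by the credit-aware (CA) device. */
    struct ca_device { int credits_available; };

    /* "Return Available PPI Credits": hand back the CTBR value and zero it.
     * No PPI is allocated or de-allocated by this command. */
    static void return_available_credits(struct packet_engine *pe,
                                         struct ca_device *ca)
    {
        ca->credits_available += pe->ctbr;
        pe->ctbr = 0;
    }

    /* The CA device checks its credit count before requesting a PPI.
     * One request is still permitted at zero or negative credit. */
    static int may_request_allocation(const struct ca_device *ca,
                                      int already_used_free_request)
    {
        if (ca->credits_available > 0)
            return 1;
        return !already_used_free_request;
    }

    int main(void)
    {
        struct packet_engine pe = { .ctbr = 3 };
        struct ca_device ca = { .credits_available = 0 };

        printf("may request at zero credit? %d\n", may_request_allocation(&ca, 0));
        return_available_credits(&pe, &ca);
        printf("credits available after return: %d\n", ca.credits_available);
        return 0;
    }
    ```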
  • Patent number: 9665519
    Abstract: In response to receiving a “Return Available PPI Credits” command from a credit-aware (CA) device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the CA device, and zeroes out its stored CTBR value. The CA device adds the credits returned to a “Credits Available” value it maintains. The CA device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another aspect, the CA device issues one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.
    Type: Grant
    Filed: January 7, 2015
    Date of Patent: May 30, 2017
    Assignee: Netronome Systems, Inc.
    Inventors: Salma Mirza, Gavin J. Stark, Steven W. Zagorianakos
  • Patent number: 9584637
    Abstract: Circuitry to provide in-order packet delivery. A packet descriptor including a sequence number is received. It is determined in which of three ranges the sequence number resides. Depending, at least in part, on the range in which the sequence number resides, it is determined whether the packet descriptor is to be communicated to a scheduler, which causes an associated packet to be transmitted. If the sequence number resides in a first “flush” range, all associated packet descriptors are output. If the sequence number resides in a second “send” range, only the received packet descriptor is output. If the sequence number resides in a third “store and reorder” range and the sequence number is the next in-order sequence number, the packet descriptor is output; if the sequence number is not the next in-order sequence number, the packet descriptor is stored in a buffer and a corresponding valid bit is set.
    Type: Grant
    Filed: February 19, 2014
    Date of Patent: February 28, 2017
    Assignee: Netronome Systems, Inc.
    Inventors: Ron Lamar Swartzentruber, Steven W. Zagorianakos, Gavin J. Stark
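    The decision in this abstract boils down to classifying each incoming sequence number into one of three ranges and acting accordingly: flush everything, send just this descriptor, or reorder (output it if it is the next expected number, otherwise buffer it and set the valid bit). The C sketch below is a hypothetical simplification that shows only that three-way decision; the range boundaries and buffer handling are invented for the example.
    ```c
    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical range classification; real boundaries depend on the
     * reorder window configured in the circuit. */
    enum range { FLUSH_RANGE, SEND_RANGE, STORE_REORDER_RANGE };

    static enum range classify(unsigned seq, unsigned next_expected)
    {
        if (seq + 64 < next_expected)  return FLUSH_RANGE;     /* far behind */
        if (seq < next_expected)       return SEND_RANGE;      /* slightly behind */
        return STORE_REORDER_RANGE;                            /* at or ahead of expected */
    }

    static void handle_descriptor(unsigned seq, unsigned *next_expected,
                                  bool *valid, unsigned window)
    {
        switch (classify(seq, *next_expected)) {
        case FLUSH_RANGE:
            printf("seq %u: flush all buffered descriptors\n", seq);
            break;
        case SEND_RANGE:
            printf("seq %u: send this descriptor only\n", seq);
            break;
        case STORE_REORDER_RANGE:
            if (seq == *next_expected) {
                printf("seq %u: in order, output to scheduler\n", seq);
                (*next_expected)++;
            } else {
                valid[seq % window] = true;   /* buffer it, set the valid bit */
                printf("seq %u: out of order, stored in reorder buffer\n", seq);
            }
            break;
        }
    }

    int main(void)
    {
        bool valid[8] = { false };
        unsigned next_expected = 100;

        handle_descriptor(101, &next_expected, valid, 8);  /* stored and marked valid */
        handle_descriptor(100, &next_expected, valid, 8);  /* in order, output */
        handle_descriptor(10,  &next_expected, valid, 8);  /* flush range */
        return 0;
    }
    ```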
  • Patent number: 9558224
    Abstract: An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: January 31, 2017
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos
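    The indexing idea in this abstract is that two automata share one 2^n-row table by mapping their state numbers onto rows in two different ways, so rows left unused by one indexing scheme can hold entries for the other. The C sketch below is a hypothetical illustration of two index functions over one shared table; the specific mappings are made up for the example.
    ```c
    #include <stdio.h>
    #include <stdint.h>

    #define N 4                       /* n-bit entries -> 2^n = 16 rows */
    #define ROWS (1u << N)

    /* One shared transition table: each row corresponds to an automaton
     * state and holds n-bit entry values that point to other rows. */
    static uint8_t table[ROWS][4];

    /* Hypothetical indexing: the first NFA uses its state number directly,
     * the second NFA uses a different mapping, so the two interleave and
     * every row can be used by one automaton or the other. */
    static unsigned index_nfa1(unsigned state) { return state % ROWS; }
    static unsigned index_nfa2(unsigned state) { return (ROWS - 1) - (state % ROWS); }

    int main(void)
    {
        /* Encode a transition for each NFA into the same physical table. */
        table[index_nfa1(2)][0] = (uint8_t)index_nfa1(3);  /* NFA 1: state 2 -> 3 */
        table[index_nfa2(0)][0] = (uint8_t)index_nfa2(5);  /* NFA 2: state 0 -> 5 */

        printf("NFA1 state 2 stored in row %u\n", index_nfa1(2));
        printf("NFA2 state 0 stored in row %u\n", index_nfa2(0));
        return 0;
    }
    ```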
  • Patent number: 9465651
    Abstract: A remote processor interacts with a transactional memory that has a memory, local BWC (Byte-Wise Compare) resources, and local NFA (Non-deterministic Finite Automaton) engine resources. The processor causes a byte stream to be transferred into the transactional memory and into the memory. The processor then uses the BWC circuit to find a character signature in the byte stream. The processor obtains information about the character signature from the BWC circuit, and based on the information uses the NFA engine to process the byte stream starting at a byte position determined based at least in part on the results of the BWC circuit. From the time the byte stream is initially written into the transactional memory until the time the NFA engine completes, the byte stream is not read out of the transactional memory.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: October 11, 2016
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos
  • Patent number: 9417656
    Abstract: An NFA (Non-deterministic Finite Automaton) circuit includes a hardware byte characterizer, a first matching circuit (performs a TCAM match function), a second matching circuit (performs a wide match function), a multiplexer that outputs a selected output from either the first or second matching circuits, and a storage device. N data values stored in first storage locations of the storage device are supplied to the first matching circuit as an N-bit mask value and are simultaneously supplied to the second matching circuit as N bits of an N+O-bit mask value. O data values stored in second storage locations of the storage device are supplied to the first matching circuit as the O-bit match value and are simultaneously supplied to the second matching circuit as O bits of the N+O-bit mask value. P data values stored in third storage locations are supplied onto the select inputs of the multiplexer.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: August 16, 2016
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos
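    The first matching circuit in this abstract performs a TCAM-style match, i.e. a compare in which masked-off bit positions are treated as "don't care." The C sketch below shows only that generic mask/match comparison as a point of reference; it is a hypothetical simplification and does not reproduce the circuit's N/O bit partitioning or the wide-match path.
    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    /* Generic TCAM-style compare: bits where mask is 0 are "don't care",
     * bits where mask is 1 must equal the corresponding match bits. */
    static bool tcam_match(uint8_t input, uint8_t mask, uint8_t match)
    {
        return (input & mask) == (match & mask);
    }

    int main(void)
    {
        uint8_t mask  = 0xF0;   /* only the upper four bits participate */
        uint8_t match = 0x40;   /* ...and must equal 0100               */

        printf("0x4A matches: %d\n", tcam_match(0x4A, mask, match));  /* 1 */
        printf("0x5A matches: %d\n", tcam_match(0x5A, mask, match));  /* 0 */
        return 0;
    }
    ```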
  • Patent number: 9413665
    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. A device interacting with the packet engine can use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. Alternatively, the device can use a Linear Addressing Mode (LAM) to communicate with the packet engine. A PAM/LAM selection code field in a bus transaction value sent to the packet engine indicates whether PAM or LAM will be used.
    Type: Grant
    Filed: August 20, 2014
    Date of Patent: August 9, 2016
    Assignee: Netronome Systems, Inc.
    Inventors: Salma Mirza, Gavin J. Stark, Steven W. Zagorianakos
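    The addressing-mode choice in this abstract is carried by a selection code field in the bus transaction value: one encoding means the address portion is a PPI (Packet Portion Identifier) to be resolved by the packet engine, the other means it is a plain linear address. The C sketch below is a hypothetical decoder for such a field; the bit layout is invented for the example.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical layout: bit 31 selects the addressing mode, the low
     * 24 bits carry either a PPI or a linear address. */
    #define PAM_LAM_BIT   (1u << 31)
    #define ADDR_MASK     0x00FFFFFFu

    static void decode_transaction(uint32_t bus_value)
    {
        uint32_t addr = bus_value & ADDR_MASK;

        if (bus_value & PAM_LAM_BIT)
            printf("PAM: packet engine resolves PPI %u to a memory location\n", addr);
        else
            printf("LAM: linear address 0x%06x used directly\n", addr);
    }

    int main(void)
    {
        decode_transaction(PAM_LAM_BIT | 42);   /* PPI addressing mode    */
        decode_transaction(0x001000u);          /* linear addressing mode */
        return 0;
    }
    ```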
  • Patent number: 9405713
    Abstract: The functional circuitry of a network flow processor is partitioned into a number of rectangular islands. The islands are disposed in rows. A configurable mesh data bus extends through the islands. A first island includes a first memory and a first data bus interface. A second island includes a processor, a second memory, and a second data bus interface. The processor can issue a command for a target memory to do an action. If a field in the command has a first value then the target memory is the first memory, whereas if the field has a second value then the target memory is the second memory. The command format is the same, regardless of whether the target memory is local or remote. If the target memory is remote, then a data bus bridge adds destination information before putting the command onto the global configurable mesh data bus.
    Type: Grant
    Filed: February 17, 2012
    Date of Patent: August 2, 2016
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos
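    The key point in this abstract is that the issuing processor uses one command format either way: a field in the command selects the local or the remote memory as the target, and only when the target is remote does the data bus bridge append destination information before the command goes onto the mesh. The C sketch below is a hypothetical rendering of that dispatch decision; the field values and the destination island number are invented.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical command format: the same structure is used whether the
     * target memory is local to the island or on a remote island. */
    struct command {
        uint8_t target_field;   /* first value = local memory, second = remote */
        uint8_t action;
    };

    enum { TARGET_LOCAL = 0, TARGET_REMOTE = 1 };

    static void issue_command(const struct command *cmd)
    {
        if (cmd->target_field == TARGET_LOCAL) {
            printf("action %u handled by the local memory\n", cmd->action);
        } else {
            /* The data bus bridge adds destination information before the
             * command is put onto the configurable mesh data bus. */
            uint8_t destination_island = 12;   /* invented value */
            printf("action %u sent to island %u over the mesh data bus\n",
                   cmd->action, destination_island);
        }
    }

    int main(void)
    {
        issue_command(&(struct command){ .target_field = TARGET_LOCAL,  .action = 1 });
        issue_command(&(struct command){ .target_field = TARGET_REMOTE, .action = 2 });
        return 0;
    }
    ```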
  • Patent number: 9401880
    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form a local event ring. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. The local event ring involves event ring circuits and event ring segments. In one example, a packet is received onto a first island. If an amount of a processing resource (for example, memory buffer space) available to the first island is below a threshold, then an event packet is communicated from the first island to a second island via the local event ring. In response, the second island causes a third island to communicate via a command/push/pull data bus with the first island, thereby increasing the amount of the processing resource available to the first island for handling incoming packets.
    Type: Grant
    Filed: December 31, 2014
    Date of Patent: July 26, 2016
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos, Ron L. Swartzentruber, Richard P. Bouley
  • Publication number: 20160124772
    Abstract: A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet arrives. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request processing. Once the packet becomes available, the header portion is automatically provided to the corresponding microengine for packet processing. Only one bus transaction is needed for the microengines to start packet processing. Second, the microengines can process packets before the entire packet is written into the memory. This is especially useful for large packets because the packets do not have to be written into the memory completely before being processed by the microengines.
    Type: Application
    Filed: October 31, 2014
    Publication date: May 5, 2016
    Inventors: Salma Mirza, Steven W. Zagorianakos, Gavin J. Stark
  • Publication number: 20160057058
    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. A device interacting with the packet engine can use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. Alternatively, the device can use a Linear Addressing Mode (LAM) to communicate with the packet engine. A PAM/LAM selection code field in a bus transaction value sent to the packet engine indicates whether PAM or LAM will be used.
    Type: Application
    Filed: August 20, 2014
    Publication date: February 25, 2016
    Inventors: Salma Mirza, Gavin J. Stark, Steven W. Zagorianakos
  • Publication number: 20160055111
    Abstract: In response to receiving a novel “Return Available PPI Credits” command from a credit-aware device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the credit-aware device, and zeroes out its stored CTBR value. The credit-aware device adds the credits returned to a “Credits Available” value it maintains. The credit-aware device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another novel aspect, the credit-aware device is permitted to issue one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.
    Type: Application
    Filed: January 7, 2015
    Publication date: February 25, 2016
    Inventors: Salma Mirza, Gavin J. Stark, Steven W. Zagorianakos
  • Publication number: 20160055112
    Abstract: In response to receiving a novel “Return Available PPI Credits” command from a credit-aware device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the credit-aware device, and zeroes out its stored CTBR value. The credit-aware device adds the credits returned to a “Credits Available” value it maintains. The credit-aware device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another novel aspect, the credit-aware device is permitted to issue one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.
    Type: Application
    Filed: January 6, 2015
    Publication date: February 25, 2016
    Inventors: Salma Mirza, Gavin J. Stark, Steven W. Zagorianakos