Patents by Inventor Michael Brian Galles

Michael Brian Galles has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240080279
    Abstract: A network appliance receives a network packet and determines an application identifier for the network packet. Key specification fetching circuits in the processing stages of the network appliance's match-action pipelines can use the application identifiers to read key specifications. The key specifications are stored in memory and may be cached near the processing stages. Key construction circuits in the processing stages can use the key specifications to construct keys. The processing stages can process the network packet based on the keys because the keys may be used to obtain action indicators from match-action tables. As such, the processing stages can construct and use keys that may be dynamically defined by storing their key specifications in memory.
    Type: Application
    Filed: September 6, 2022
    Publication date: March 7, 2024
    Inventors: Michael Brian Galles, Neil Barrett
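
A minimal Python sketch of the dynamically defined key construction described in publication 20240080279 above, assuming a toy in-memory key-specification table; the application IDs, field names, and match-action entries are illustrative, not the patent's actual data structures.

```python
# Key specifications stored "in memory": each application ID maps to the header
# fields that are concatenated into a lookup key.
KEY_SPECS = {
    1: ("src_ip", "dst_ip", "proto"),                               # 3-tuple app
    2: ("src_ip", "dst_ip", "src_port", "dst_port", "proto"),       # 5-tuple app
}

# A toy match-action table keyed by the constructed key.
MATCH_ACTION_TABLE = {
    ("10.0.0.1", "10.0.0.2", "tcp"): "forward_port_3",
}

def construct_key(packet_headers: dict, app_id: int) -> tuple:
    """Key construction circuit: read the key spec for this application ID
    and assemble the key from the packet's header fields."""
    spec = KEY_SPECS[app_id]
    return tuple(packet_headers[field] for field in spec)

def process(packet_headers: dict, app_id: int) -> str:
    """Processing stage: build the key, then look up an action indicator."""
    key = construct_key(packet_headers, app_id)
    return MATCH_ACTION_TABLE.get(key, "default_action")

if __name__ == "__main__":
    pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
           "src_port": 1234, "dst_port": 80, "proto": "tcp"}
    print(process(pkt, app_id=1))   # -> forward_port_3
    print(process(pkt, app_id=2))   # -> default_action (no 5-tuple entry)
```

Because the key layout lives in memory rather than in fixed logic, supporting a new key format only requires writing a new entry into the key-specification table.
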
  • Patent number: 11907751
    Abstract: Described are platforms, systems, and methods for resource fairness enforcement. In one aspect, a programmable input output (IO) device comprises a memory unit, the memory unit having instructions stored thereon which, when executed by the programmable IO device, cause the programmable IO device to perform operations comprising: receiving an input from a logical interface (LIF); determining, by at least one meter, a metric regarding at least one resource used during a processing of the input through a programmable pipeline; and regulating additional input received from the LIF based on the metric and a threshold for the at least one resource.
    Type: Grant
    Filed: December 19, 2022
    Date of Patent: February 20, 2024
    Assignee: Pensando Systems, Inc.
    Inventor: Michael Brian Galles
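
The abstract above (patent 11907751) describes metering per logical interface and regulating further input once a resource threshold is crossed. The following is a rough sketch of that idea, assuming a software token-bucket meter; the rates, costs, and LIF identifiers are made up for illustration.

```python
import time

class Meter:
    """Simple token-bucket meter for one resource (here: pipeline 'credits')."""
    def __init__(self, rate_per_s: float, burst: float):
        self.rate = rate_per_s
        self.tokens = burst
        self.burst = burst
        self.last = time.monotonic()

    def consume(self, amount: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False          # over threshold: caller should regulate this LIF

# One meter per logical interface (LIF).
lif_meters = {0: Meter(rate_per_s=100.0, burst=10.0),
              1: Meter(rate_per_s=100.0, burst=10.0)}

def handle_input(lif_id: int, cost: float) -> str:
    """Admit the input only if the LIF's meter still has budget for the
    resource it would consume in the programmable pipeline."""
    if lif_meters[lif_id].consume(cost):
        return "processed"
    return "regulated"        # e.g. back-pressure or defer this LIF

if __name__ == "__main__":
    results = [handle_input(0, cost=3.0) for _ in range(5)]
    print(results)            # later requests from LIF 0 get regulated
```
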
  • Patent number: 11902184
    Abstract: PCIe devices installed in host computers communicating with service nodes can provide virtualized NVMe over fabric services. A workload on the host computer can submit an SQE on an NVMe SQ. The PCIe device can read the SQE to obtain a command identifier, an OpCode, and a namespace identifier (NSID). The SQE can be used to produce an LTP packet that includes the opcode, the NSID, and a request identifier. The LTP packet can be sent to the service node, which may access a SAN in accordance with the opcode and NSID, and can respond to the LTP with a second LTP that includes the request identifier and a status indicator. The PCIe device can use the status indicator and the request identifier to produce a CQE that is placed on an NVMe CQ associated with the SQ.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: February 13, 2024
    Assignee: Pensando Systems Inc.
    Inventors: Silvano Gai, Michael Brian Galles, Mario Mazzola, Luca Cafiero, Krishna Doddapaneni, Sarat Kamisetty
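
A simplified sketch of the SQE-to-LTP-to-CQE round trip summarized in patent 11902184 above. The dataclass fields and the in-process service_node function are hypothetical stand-ins for the PCIe device hardware and the remote service node; the LTP field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SQE:                  # NVMe submission queue entry (simplified)
    command_id: int
    opcode: int             # e.g. 0x02 = read
    nsid: int               # namespace identifier

@dataclass
class LTP:                  # transport packet between PCIe device and service node
    request_id: int
    opcode: int = 0
    nsid: int = 0
    status: int = 0

def pcie_device_submit(sqe: SQE, next_request_id: int) -> LTP:
    """PCIe device: read the SQE and build an LTP request for the service node."""
    return LTP(request_id=next_request_id, opcode=sqe.opcode, nsid=sqe.nsid)

def service_node(request: LTP) -> LTP:
    """Service node: access the SAN for the given opcode/NSID (elided) and
    answer with an LTP carrying the same request identifier and a status."""
    return LTP(request_id=request.request_id, status=0)   # 0 = success

def pcie_device_complete(response: LTP, pending: dict) -> dict:
    """PCIe device: translate the LTP response back into a CQE on the NVMe CQ."""
    sqe = pending.pop(response.request_id)
    return {"command_id": sqe.command_id, "status": response.status}

if __name__ == "__main__":
    sqe = SQE(command_id=7, opcode=0x02, nsid=1)
    pending = {42: sqe}                       # request_id -> originating SQE
    req = pcie_device_submit(sqe, next_request_id=42)
    cqe = pcie_device_complete(service_node(req), pending)
    print(cqe)                                # {'command_id': 7, 'status': 0}
```
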
  • Patent number: 11863467
    Abstract: A network appliance can have an input port that can receive network packets at line rate, two or more ingress queues, a line rate classification circuit that can place the network packets on the ingress queues at the line rate, a packet buffer that can store the network packets, and a sub line rate packet processing circuit that can process the network packets that are stored in the packet buffer. The line rate classification circuit can place a network packet on one of the ingress queues based on the network packet's packet contents. A buffer scheduler can select network packets for processing by a sub line rate packet processing circuit based on the priority levels of the ingress queues.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: January 2, 2024
    Assignee: PENSANDO SYSTEMS INC.
    Inventors: Michael Brian Galles, Vipin Jain
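
Patent 11863467 above separates line rate classification onto priority ingress queues from sub line rate processing behind a buffer scheduler. The sketch below models that split with two Python deques; the "control vs. data" classification rule is an assumed example, not the patented classifier.

```python
from collections import deque

# Ingress queues in descending priority (queue 0 is served first).
ingress_queues = [deque(), deque()]

def classify(packet: dict) -> int:
    """Line rate classification: pick an ingress queue from packet contents.
    Here, 'control' traffic is treated as high priority (an assumption)."""
    return 0 if packet.get("kind") == "control" else 1

def enqueue(packet: dict) -> None:
    ingress_queues[classify(packet)].append(packet)

def schedule_next():
    """Buffer scheduler: hand the sub line rate processing circuit a packet
    from the highest-priority non-empty queue."""
    for q in ingress_queues:
        if q:
            return q.popleft()
    return None

if __name__ == "__main__":
    enqueue({"kind": "data", "id": 1})
    enqueue({"kind": "control", "id": 2})
    enqueue({"kind": "data", "id": 3})
    while (pkt := schedule_next()) is not None:
        print("processing", pkt["id"])   # 2, 1, 3
```
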
  • Publication number: 20230231818
    Abstract: A network appliance can have an input port that can receive network packets at line rate, two or more ingress queues, a line rate classification circuit that can place the network packets on the ingress queues at the line rate, a packet buffer that can store the network packets, and a sub line rate packet processing circuit that can process the network packets that are stored in the packet buffer. The line rate classification circuit can place a network packet on one of the ingress queues based on the network packet's packet contents. A buffer scheduler can select network packets for processing by a sub line rate packet processing circuit based on the priority levels of the ingress queues.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 20, 2023
    Inventors: Michael Brian Galles, Vipin Jain
  • Publication number: 20230121317
    Abstract: Described are platforms, systems, and methods for resource fairness enforcement. In one aspect, a programmable input output (IO) device comprises a memory unit, the memory unit having instructions stored thereon which, when executed by the programmable IO device, cause the programmable IO device to perform operations comprising: receiving an input from a logical interface (LIF); determining, by at least one meter, a metric regarding at least one resource used during a processing of the input through a programmable pipeline; and regulating additional input received from the LIF based on the metric and a threshold for the at least one resource.
    Type: Application
    Filed: December 19, 2022
    Publication date: April 20, 2023
    Inventor: Michael Brian Galles
  • Patent number: 11593136
    Abstract: Described are platforms, systems, and methods for resource fairness enforcement. In one aspect, a programmable input output (IO) device comprises a memory unit, the memory unit having instructions stored thereon which, when executed by the programmable IO device, cause the programmable IO device to perform operations comprising: receiving an input from a logical interface (LIF); determining, by at least one meter, a metric regarding at least one resource used during a processing of the input through a programmable pipeline; and regulating additional input received from the LIF based on the metric and a threshold for the at least one resource.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: February 28, 2023
    Assignee: Pensando Systems, Inc.
    Inventor: Michael Brian Galles
  • Patent number: 11595502
    Abstract: Certain tasks related to processing layer 7 (L7) data streams, such as HTTP data streams, can be performed by an L7 assist circuit instead of by general-purpose CPUs. The L7 assist circuit can normalize URLs, Huffman decode, Huffman encode, and generate hashes of normalized URLs. An L7 data stream, which is reassembled from received network packets, includes an L7 header. L7 assist produces an augmented L7 header that is added to the L7 data stream. The CPUs can use the augmented L7 header, thereby speeding up processing. On the outbound path, L7 assist can remove the augmented L7 header and perform Huffman encoding so that the CPUs can perform other tasks.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: February 28, 2023
    Assignee: Pensando Systems Inc.
    Inventors: Michael Brian Galles, Hemant Vinchure
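
Patent 11595502 above offloads URL normalization and hashing to an L7 assist circuit that emits an augmented L7 header. The sketch below shows a software approximation of the normalize-and-hash step; the normalization rules and the SHA-256 digest are assumptions, and the Huffman encode/decode stages are elided.

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Illustrative URL normalization: lowercase scheme and host, drop default
    ports, collapse an empty path to '/'."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.port and parts.port not in (80, 443):
        host = f"{host}:{parts.port}"
    path = parts.path or "/"
    return urlunsplit((parts.scheme.lower(), host, path, parts.query, ""))

def augment_l7_header(l7_header: dict) -> dict:
    """Produce an 'augmented' header: the normalized URL plus its hash, so the
    CPUs can match on a fixed-size digest instead of re-parsing the URL."""
    norm = normalize_url(l7_header["url"])
    digest = hashlib.sha256(norm.encode()).hexdigest()[:16]
    return {**l7_header, "normalized_url": norm, "url_hash": digest}

if __name__ == "__main__":
    hdr = {"method": "GET", "url": "HTTP://Example.COM:80"}
    print(augment_l7_header(hdr))
```
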
  • Patent number: 11593294
    Abstract: PCIe devices installed in host computers communicating with service nodes can provide virtualized and high availability PCIe functions to host computer workloads. The PCIe device can receive a PCIe TLP encapsulated in a PCIe DLLP via a PCIe bus. The TLP includes a TLP address value, a TLP requester identifier, and a TLP type. The PCIe device can terminate the PCIe transaction by sending a DLLP ACK message to the host computer in response to receiving the TLP. The TLP packet can be used to create a workload request capsule that includes a request type indicator, an address offset, and a workload request identifier. A workload request packet that includes the workload request capsule can be sent to a virtualized service endpoint. The service node, implementing the virtualized service endpoint, receives a workload response packet that includes the workload request identifier and a workload response payload.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: February 28, 2023
    Assignee: Pensando Systems Inc.
    Inventors: Michael Brian Galles, Silvano Gai, Mario Mazzola, Luca Cafiero, Francis Matus, Krishna Doddapaneni, Sarat Kamisetty
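
Patent 11593294 above turns a terminated PCIe TLP into a workload request capsule bound for a virtualized service endpoint. The sketch below mimics that translation in Python; the BAR base address, capsule field names, and the printed DLLP ACK are illustrative assumptions.

```python
from dataclasses import dataclass
import itertools

@dataclass
class TLP:                    # PCIe transaction layer packet (simplified)
    address: int
    requester_id: int
    tlp_type: str             # e.g. "MemWr" or "MemRd"
    data: bytes = b""

@dataclass
class RequestCapsule:         # workload request capsule (illustrative fields)
    request_type: str
    address_offset: int
    request_id: int
    payload: bytes = b""

BAR_BASE = 0xF000_0000        # assumed base address of the emulated function's BAR
_request_ids = itertools.count(1)

def send_dllp_ack(tlp: TLP) -> None:
    """Stand-in for the DLLP ACK that terminates the PCIe transaction locally."""
    print(f"DLLP ACK sent for requester {tlp.requester_id:#x}")

def tlp_to_capsule(tlp: TLP) -> RequestCapsule:
    """PCIe device: ACK the TLP, then turn it into a capsule addressed to the
    virtualized service endpoint."""
    send_dllp_ack(tlp)
    return RequestCapsule(request_type=tlp.tlp_type,
                          address_offset=tlp.address - BAR_BASE,
                          request_id=next(_request_ids),
                          payload=tlp.data)

def service_endpoint(capsule: RequestCapsule) -> dict:
    """Service node: handle the capsule and respond, keyed by request_id."""
    return {"request_id": capsule.request_id, "payload": b"\x00" * 4}

if __name__ == "__main__":
    tlp = TLP(address=0xF000_0040, requester_id=0x0100, tlp_type="MemRd")
    print(service_endpoint(tlp_to_capsule(tlp)))
```
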
  • Publication number: 20220374379
    Abstract: PCIe devices installed in host computers communicating with service nodes can provide virtualized and high availability PCIe functions to host computer workloads. The PCIe device can receive a PCIe TLP encapsulated in a PCIe DLLP via a PCIe bus. The TLP includes a TLP address value, a TLP requester identifier, and a TLP type. The PCIe device can terminate the PCIe transaction by sending a DLLP ACK message to the host computer in response to receiving the TLP. The TLP packet can be used to create a workload request capsule that includes a request type indicator, an address offset, and a workload request identifier. A workload request packet that includes the workload request capsule can be sent to a virtualized service endpoint. The service node, implementing the virtualized service endpoint, receives a workload response packet that includes the workload request identifier and a workload response payload.
    Type: Application
    Filed: May 20, 2021
    Publication date: November 24, 2022
    Inventors: Michael Brian Galles, Silvano Gai, Mario Mazzola, Luca Cafiero, Francis Matus, Krishna Doddapaneni, Sarat Kamisetty
  • Publication number: 20220377027
    Abstract: PCIe devices installed in host computers communicating with service nodes can provide virtualized NVMe over fabric services. A workload on the host computer can submit an SQE on an NVMe SQ. The PCIe device can read the SQE to obtain a command identifier, an OpCode, and a namespace identifier (NSID). The SQE can be used to produce an LTP packet that includes the opcode, the NSID, and a request identifier. The LTP packet can be sent to the service node, which may access a SAN in accordance with the opcode and NSID, and can respond to the LTP with a second LTP that includes the request identifier and a status indicator. The PCIe device can use the status indicator and the request identifier to produce a CQE that is placed on an NVMe CQ associated with the SQ.
    Type: Application
    Filed: May 20, 2021
    Publication date: November 24, 2022
    Inventors: Silvano Gai, Michael Brian Galles, Mario Mazzola, Luca Cafiero, Krishna Doddapaneni, Sarat Kamisetty
  • Patent number: 11489773
    Abstract: Methods and devices for processing packets with reduced data stalls are provided. The method comprises: (a) receiving a packet comprising a header portion and a payload portion, wherein the header portion is used to generate a packet header vector; (b) producing a table result by performing packet match operations, wherein the table result is generated based at least in part on the packet header vector and data stored in a match table; (c) receiving, at a match processing unit, the table result and an address of a set of instructions associated with the match table; and (d) performing, by the match processing unit, one or more actions in response to the set of instructions until completion of the instructions, wherein the one or more actions comprise modifying the header portion, updating a memory-based data structure, or initiating an event.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: November 1, 2022
    Assignee: Pensando Systems Inc.
    Inventors: Michael Brian Galles, David Clear
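
Patent 11489773 above couples a match table lookup, which produces a table result and an instruction address, with a match processing unit that runs those instructions to completion. The sketch below models that dispatch; the action functions, table keys, and instruction addresses are invented for illustration.

```python
# Hypothetical sketch: a match table lookup yields a table result plus the
# address of an instruction set; a "match processing unit" runs those
# instructions to completion on the packet header vector (PHV).

COUNTERS: dict = {}                       # memory-based data structure

def act_decrement_ttl(phv: dict, table_result: dict) -> None:
    phv["ttl"] -= 1                       # modify the header portion

def act_count(phv: dict, table_result: dict) -> None:
    name = table_result["counter"]
    COUNTERS[name] = COUNTERS.get(name, 0) + 1   # update memory-based structure

# Match table: key -> (table result, address of the associated instructions).
MATCH_TABLE = {
    ("10.0.0.2",): ({"counter": "to_host"}, 0x100),
}

# Instruction memory: address -> instruction list run until completion.
INSTRUCTIONS = {
    0x100: [act_decrement_ttl, act_count],
}

def process_packet(phv: dict) -> dict:
    key = (phv["dst_ip"],)
    table_result, addr = MATCH_TABLE.get(key, ({}, None))
    if addr is not None:
        for instr in INSTRUCTIONS[addr]:  # MPU runs the set of instructions
            instr(phv, table_result)
    return phv

if __name__ == "__main__":
    print(process_packet({"dst_ip": "10.0.0.2", "ttl": 64}))
    print(COUNTERS)
```
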
  • Patent number: 11374872
    Abstract: A multitude of data transfer queues can have data transfer operations that are scheduled for a processing circuit to perform. Some of the data transfer queues may submit so many or such large data transfer operations that others receive little or no attention. The situation can be resolved in the data plane via a processing circuit that performs the data transfer operations in conjunction with priority evaluation operations that can assign the data transfer queues to different scheduler priority classes. A scheduler can schedule data transfer operations based on the scheduler priority classes of the data transfer queues.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: June 28, 2022
    Assignee: Pensando Systems, Inc.
    Inventors: Vishwas Danivas, Murty Subba Rama Chandra Kotha, Balakrishnan Raman, Sanjay Shanbhogue, Harinadh Nagulapalli, Michael Brian Galles, Neel Patel
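
Patent 11374872 above keeps heavy data transfer queues from starving light ones by assigning queues to scheduler priority classes in the data plane. The sketch below shows one way such a priority evaluation and class-based scheduler could look; the byte threshold and class names are assumptions.

```python
from collections import deque

PRIORITY_CLASSES = ["high", "low"]   # served highest class first
BYTE_THRESHOLD = 1000                # assumed cutoff for demoting a heavy queue

class TransferQueue:
    def __init__(self, name: str):
        self.name = name
        self.ops = deque()           # pending data transfer operations (sizes)
        self.bytes_submitted = 0
        self.priority_class = "high"

    def submit(self, size: int) -> None:
        self.ops.append(size)
        self.bytes_submitted += size

def evaluate_priorities(queues) -> None:
    """Priority evaluation: demote queues that have submitted a lot of work so
    lighter queues are not starved."""
    for q in queues:
        q.priority_class = "low" if q.bytes_submitted > BYTE_THRESHOLD else "high"

def schedule(queues):
    """Scheduler: serve queues class by class, highest class first."""
    for cls in PRIORITY_CLASSES:
        for q in queues:
            if q.priority_class == cls and q.ops:
                return q.name, q.ops.popleft()
    return None

if __name__ == "__main__":
    heavy, light = TransferQueue("heavy"), TransferQueue("light")
    for _ in range(5):
        heavy.submit(400)
    light.submit(64)
    evaluate_priorities([heavy, light])
    print(schedule([heavy, light]))   # ('light', 64): the light queue goes first
```
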
  • Publication number: 20220182331
    Abstract: A multitude of data transfer queues can have data transfer operations that are scheduled for a processing circuit to perform. Some of the data transfer queues may submit so many or such large data transfer operations that others receive little or no attention. The situation can be resolved in the data plane via a processing circuit that performs the data transfer operations in conjunction with priority evaluation operations that can assign the data transfer queues to different scheduler priority classes. A scheduler can schedule data transfer operations based on the scheduler priority classes of the data transfer queues.
    Type: Application
    Filed: December 8, 2020
    Publication date: June 9, 2022
    Inventors: Vishwas Danivas, Murty Subba Rama Chandra Kotha, Balakrishnan Raman, Sanjay Shanbhogue, Harinadh Nagulapalli, Michael Brian Galles, Neel Patel
  • Publication number: 20220124182
    Abstract: Certain tasks related to processing layer 7 (L7) data streams, such as HTTP data streams, can be performed by an L7 assist circuit instead of by general-purpose CPUs. The L7 assist circuit can normalize URLs, Huffman decode, Huffman encode, and generate hashes of normalized URLs. An L7 data stream, which is reassembled from received network packets, includes an L7 header. L7 assist produces an augmented L7 header that is added to the L7 data stream. The CPUs can use the augmented L7 header, thereby speeding up processing. On the outbound path, L7 assist can remove the augmented L7 header and perform Huffman encoding so that the CPUs can perform other tasks.
    Type: Application
    Filed: October 15, 2020
    Publication date: April 21, 2022
    Inventors: Michael Brian Galles, Hemant Vinchure
  • Publication number: 20220121605
    Abstract: A configurable transaction filtering and logging circuit for on-chip communications within a semiconductor chip can store filter patterns. The filter patterns can include an address range filter pattern. The circuit can monitor transactions carried by an on-chip connection fabric. The transactions can be configured to transfer data between a first core circuit and a second core circuit that are also implemented on the semiconductor chip. The circuit can execute one of a set of actions in response to detecting a transaction that matches one of the filter patterns. One of the actions can be logging the transaction to a transaction log buffer in response to detecting that the transaction matches one of the filter patterns.
    Type: Application
    Filed: October 21, 2020
    Publication date: April 21, 2022
    Inventor: Michael Brian Galles
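
Publication 20220121605 above (granted as patent 11288226 below) describes matching on-chip transactions against stored filter patterns and logging hits to a buffer. The sketch below is a software analogue; the address range, action set, and 16-entry ring are illustrative choices.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Transaction:            # on-chip fabric transaction (simplified)
    src: str
    dst: str
    address: int
    data: int

# Stored filter patterns; here a single address-range pattern with a "log" action.
FILTER_PATTERNS = [
    {"addr_lo": 0x4000_0000, "addr_hi": 0x4000_0FFF, "action": "log"},
]

LOG_BUFFER = deque(maxlen=16)   # transaction log buffer (a small ring)

def monitor(txn: Transaction) -> None:
    """Check the transaction against every stored filter pattern and execute
    the matching pattern's action (only 'log' is sketched here)."""
    for pat in FILTER_PATTERNS:
        if pat["addr_lo"] <= txn.address <= pat["addr_hi"]:
            if pat["action"] == "log":
                LOG_BUFFER.append(txn)

if __name__ == "__main__":
    monitor(Transaction("core0", "core1", 0x4000_0010, 0xAB))
    monitor(Transaction("core0", "core1", 0x8000_0000, 0xCD))  # no match
    print(list(LOG_BUFFER))
```
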
  • Patent number: 11288226
    Abstract: A configurable transaction filtering and logging circuit for on-chip communications within a semiconductor chip can store filter patterns. The filter patterns can include an address range filter pattern. The circuit can monitor transactions carried by an on-chip connection fabric. The transactions can be configured to transfer data between a first core circuit and a second core circuit that are also implemented on the semiconductor chip. The circuit can execute one of a set of actions in response to detecting a transaction that matches one of the filter patterns. One of the actions can be logging the transaction to a transaction log buffer in response to detecting that the transaction matches one of the filter patterns.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: March 29, 2022
    Assignee: Pensando Systems, Inc.
    Inventor: Michael Brian Galles
  • Patent number: 11263158
    Abstract: Methods and apparatuses for a programmable IO device interface are provided. The apparatus may comprise: a first memory unit having a plurality of programs stored thereon, the plurality of programs being associated with a plurality of actions comprising updating a memory-based data structure, inserting a DMA command, or initiating an event; a second memory unit for receiving and storing a table result, the table result being provided by a table engine configured to perform packet match operations on (i) a packet header vector contained in a header portion and (ii) data stored in a programmable match table; and circuitry for executing a program selected from the plurality of programs in response to the table result and an address received by the apparatus, the program being executed until completion and being associated with the programmable match table.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: March 1, 2022
    Assignee: PENSANDO SYSTEMS INC.
    Inventors: Michael Brian Galles, J. Bradley Smith, Hemant Vinchure
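
Patent 11263158 above selects a stored program based on a table result and an address and runs it to completion, where program actions include updating a memory-based data structure, inserting a DMA command, or initiating an event. The sketch below illustrates that dispatch; the program body, queues, and addresses are hypothetical.

```python
# Hypothetical sketch of the program-dispatch idea: a program is selected from
# stored programs by address, then run to completion in response to a table
# result. Queue names and program bodies are made up.

DMA_QUEUE, EVENT_QUEUE, STATE = [], [], {"rx_packets": 0}

def prog_rx_accounting(table_result: dict) -> None:
    STATE["rx_packets"] += 1                              # update data structure
    DMA_QUEUE.append({"dst": table_result["host_addr"],   # insert a DMA command
                      "len": table_result["length"]})
    EVENT_QUEUE.append("rx_complete")                     # initiate an event

# First memory unit: programs stored at addresses.
PROGRAMS = {0x200: prog_rx_accounting}

def on_table_result(table_result: dict, program_address: int) -> None:
    """Run the program associated with the match table until completion."""
    PROGRAMS[program_address](table_result)

if __name__ == "__main__":
    on_table_result({"host_addr": 0x1000, "length": 128}, program_address=0x200)
    print(STATE, DMA_QUEUE, EVENT_QUEUE)
```
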
  • Publication number: 20210157621
    Abstract: Described are platforms, systems, and methods for resource fairness enforcement. In one aspect, a programmable input output (IO) device comprises a memory unit, the memory unit having instructions stored thereon which, when executed by the programmable IO device, cause the programmable IO device to perform operations comprising: receiving an input from a logical interface (LIF); determining, by at least one meter, a metric regarding at least one resource used during a processing of the input through a programmable pipeline; and regulating additional input received from the LIF based on the metric and a threshold for the at least one resource.
    Type: Application
    Filed: November 21, 2019
    Publication date: May 27, 2021
    Inventor: Michael Brian Galles
  • Publication number: 20210103536
    Abstract: Methods and apparatuses for a programmable IO device interface are provided. The apparatus may comprise: a first memory unit having a plurality of programs stored thereon, the plurality of programs being associated with a plurality of actions comprising updating a memory-based data structure, inserting a DMA command, or initiating an event; a second memory unit for receiving and storing a table result, the table result being provided by a table engine configured to perform packet match operations on (i) a packet header vector contained in a header portion and (ii) data stored in a programmable match table; and circuitry for executing a program selected from the plurality of programs in response to the table result and an address received by the apparatus, the program being executed until completion and being associated with the programmable match table.
    Type: Application
    Filed: February 19, 2019
    Publication date: April 8, 2021
    Inventors: Michael Brian Galles, J. Bradley Smith, Hemant Vinchure