Patents Assigned to Innovium, Inc.
  • Patent number: 11652744
    Abstract: Approaches, techniques, and mechanisms are disclosed for maintaining efficient representations of prefix tables for utilization by network switches and other devices. In an embodiment, the performance of a network device is greatly enhanced using a working representation of a prefix table that includes multiple stages of prefix entries. Higher-stage prefixes are stored in slotted pools. Mapping logic, such as a hash function, determines the slots in which a given higher-stage prefix may be stored. When trying to find a longest-matching higher-stage prefix for an input key, only the slots that map to that input key need be read. Higher-stage prefixes may further point to arrays of lower-stage prefixes. Hence, once a longest-matching higher-stage prefix is found for an input key, the longest prefix match in the table may be found simply by comparing the input key to lower-stage prefixes in the array that the longest-matching higher-stage prefix points to. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: May 16, 2023
    Assignee: Innovium, Inc.
    Inventor: Srinivas Gangam
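    The following is a minimal Python sketch of the two-stage lookup described in the abstract above: higher-stage prefixes sit in hashed slot pools, and each carries an array of lower-stage prefixes. The split point, slot count, hash, and every identifier are illustrative assumptions, not details taken from the patent.

        # Toy two-stage longest-prefix match. Higher-stage prefixes are assumed
        # to be /16 or longer so that each maps to exactly one slot.
        import ipaddress

        HIGH_STAGE_BITS = 16
        NUM_SLOTS = 256

        # slot index -> list of (higher-stage network, [lower-stage networks])
        pool = [[] for _ in range(NUM_SLOTS)]

        def slot_for(addr_int):
            """Hash-style mapping from the top bits of a key to one slot."""
            return ((addr_int >> (32 - HIGH_STAGE_BITS)) * 2654435761) % NUM_SLOTS

        def insert(high_prefix, lower_prefixes):
            net = ipaddress.ip_network(high_prefix)
            assert net.prefixlen >= HIGH_STAGE_BITS   # keeps the slot well defined
            lowers = [ipaddress.ip_network(p) for p in lower_prefixes]
            pool[slot_for(int(net.network_address))].append((net, lowers))

        def longest_match(address):
            ip = ipaddress.ip_address(address)
            candidates = pool[slot_for(int(ip))]      # only one slot need be read
            best_high, best_lowers = None, None
            for net, lowers in candidates:
                if ip in net and (best_high is None or net.prefixlen > best_high.prefixlen):
                    best_high, best_lowers = net, lowers
            if best_high is None:
                return None
            best_low = max((n for n in best_lowers if ip in n),
                           key=lambda n: n.prefixlen, default=None)
            return best_low or best_high

        insert("10.1.0.0/16", ["10.1.0.0/20", "10.1.2.0/24"])
        print(longest_match("10.1.2.5"))              # 10.1.2.0/24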
  • Patent number: 11652750
    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when a rate at which the network device is receiving data belonging to the flow exceeds an excessive-rate threshold and is thus an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: May 16, 2023
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Rupa Budhia, Puneet Agarwal
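    A rough Python sketch of the per-flow rate tracking and policy trigger described in the abstract above. The flow key, measurement window, threshold value, and policy hook are assumptions made for the example.

        import time
        from collections import defaultdict

        EXCESSIVE_RATE_BPS = 10_000_000     # assumed threshold: 10 Mb/s
        WINDOW_SECONDS = 0.1                # assumed measurement window

        flows = defaultdict(lambda: {"bytes": 0, "window_start": time.monotonic(),
                                     "excessive": False})

        def flow_key(packet):
            """Assumed 3-tuple key; a real device parses headers in hardware."""
            return (packet["src"], packet["dst"], packet["proto"])

        def apply_excessive_rate_policy(packet):
            # Stand-in for the policies named above: notification, differentiated
            # discard, differentiated congestion notification, reprioritization.
            packet["marked_excessive"] = True

        def on_receive(packet):
            state = flows[flow_key(packet)]
            now = time.monotonic()
            elapsed = now - state["window_start"]
            if elapsed >= WINDOW_SECONDS:
                rate_bps = state["bytes"] * 8 / elapsed
                state["excessive"] = rate_bps > EXCESSIVE_RATE_BPS
                state["bytes"] = 0
                state["window_start"] = now
            state["bytes"] += packet["length"]
            if state["excessive"]:
                apply_excessive_rate_policy(packet)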
  • Patent number: 11637786
    Abstract: When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease an enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused and/or be likely to exacerbate garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to more optimally handle multi-destination traffic, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, the high-priority multi-destination data unit can no longer be dropped, and is linked to a replication queue for replication. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: April 25, 2023
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan, Ajit Kumar Jain
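    A toy Python sketch of the watermark-driven throttling described in the abstract above: multicast admission is restricted while the garbage-collection backlog is high and restored once it drains. The watermark values and class names are assumptions.

        GC_HIGH_WATERMARK = 8000     # buffer cells queued for garbage collection
        GC_LOW_WATERMARK = 2000

        class MulticastAdmission:
            def __init__(self):
                self.throttled = False

            def update(self, gc_backlog_cells):
                if not self.throttled and gc_backlog_cells > GC_HIGH_WATERMARK:
                    self.throttled = True      # e.g. reduce multicast enqueue rate
                elif self.throttled and gc_backlog_cells < GC_LOW_WATERMARK:
                    self.throttled = False     # reverse the action

            def admit(self, is_multicast):
                # Simplified decision; a real device would rate-limit rather
                # than reject multicast outright while throttled.
                return not (is_multicast and self.throttled)

        admission = MulticastAdmission()
        admission.update(gc_backlog_cells=9000)
        print(admission.admit(is_multicast=True))   # False while throttled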
  • Patent number: 11621904
    Abstract: A switch or other network device may be configured as an ingress edge telemetry node in a telemetry domain. The ingress edge telemetry node may clone certain data units it processes, for example in response to certain telemetry triggers being met. The ingress edge telemetry node may further inject telemetry and/or other data into the cloned data unit. The cloned data unit continues along the same path as the original data unit until it reaches an egress edge telemetry node in the telemetry domain. The second node extracts the telemetry data from the cloned data unit and sends telemetry information based thereon to a telemetry collector, while the original data unit continues to its final destination. Nodes along the path between the first node and the second node may be configured as transit telemetry nodes that insert or otherwise update the telemetry data. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: April 4, 2023
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Meg Pei Lin, Rupa Budhia
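    The clone-and-annotate flow described in the abstract above, sketched in Python. The packet representation, trigger condition, and collector interface are assumptions; an actual device would operate on wire-format headers in the data plane.

        import copy
        import time

        def forward(packet):
            """Placeholder for normal forwarding toward the next hop."""

        class Collector:
            def send(self, records):
                print("telemetry report:", records)

        def telemetry_trigger(packet):
            return packet.get("queue_delay_ns", 0) > 50_000   # assumed trigger

        def ingress_edge(packet, node_id):
            forward(packet)                                    # original continues on its path
            if telemetry_trigger(packet):
                clone = copy.deepcopy(packet)
                clone["telemetry"] = [{"node": node_id, "ts": time.time_ns()}]
                forward(clone)                                 # clone follows the same path

        def transit_node(packet, node_id):
            if "telemetry" in packet:
                packet["telemetry"].append({"node": node_id, "ts": time.time_ns()})
            forward(packet)

        def egress_edge(packet, node_id, collector):
            records = packet.pop("telemetry", None)
            if records is not None:
                records.append({"node": node_id, "ts": time.time_ns()})
                collector.send(records)                        # clone terminates here
            else:
                forward(packet)                                # original exits the domain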
  • Patent number: 11570127
    Abstract: An ingress packet processor in a device corresponds to a group of ports and receives network packets from ports in its port group. A traffic manager in the device manages buffers storing packet data for transmission to egress packet processors. An ingress arbiter is associated with a port group and connects the port group to an ingress packet processor coupled to the ingress arbiter. The ingress arbiter determines a traffic rate at which the associated ingress packet processor transmits packets to the traffic manager. The ingress arbiter controls an associated traffic shaper to generate a number of tokens that are assigned to the port group. Upon receiving packet data from a port in the group, the ingress arbiter determines, using information from the traffic shaper, whether a token is available. Conditioned on determining that a token is available, the ingress arbiter forwards the packet data to the ingress packet processor. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: January 31, 2023
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
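    A minimal token-bucket sketch, in Python, of the arbiter behaviour described in the abstract above: tokens accrue for the port group at a configured rate, and packet data is forwarded to the ingress packet processor only when a token is available. The rate, burst size, and class names are assumptions.

        import time

        class TrafficShaper:
            def __init__(self, tokens_per_second, burst):
                self.rate = tokens_per_second
                self.burst = burst
                self.tokens = burst
                self.last = time.monotonic()

            def token_available(self):
                now = time.monotonic()
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False

        class IngressArbiter:
            def __init__(self, shaper, packet_processor):
                self.shaper = shaper
                self.processor = packet_processor

            def on_packet_data(self, data):
                # Conditioned on a token being available, forward to the
                # processor; otherwise hold the data or apply backpressure.
                if self.shaper.token_available():
                    self.processor(data)

        arbiter = IngressArbiter(TrafficShaper(tokens_per_second=1000, burst=16),
                                 packet_processor=print)
        arbiter.on_packet_data(b"example packet data")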
  • Patent number: 11567560
    Abstract: Power demands of a computing system, such as a network device and/or a component thereof, are stabilized by introducing a programmable delay into identical or substantially similar subsystems within an integrated circuit. Each subsystem reads a potentially different delay value from an associated storage, memory, or input, and waits for some time indicated by the delay value before beginning execution. For example, in a group of identical subsystems that process data concurrently, some or all of the subsystems begin processing their respective data after a different amount of delay, thus staggering their respective executions and lowering the risk of aligned edges when some or all of the subsystems concurrently step their power demands up or down. This, in turn, reduces peak power and voltage. In an embodiment, rather than being fixed at the design stage, each subsystem's delay value is programmable at some point after fabrication. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: January 31, 2023
    Assignee: Innovium, Inc.
    Inventors: Keith Michael Ring, Mohammad Kamel Issa
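    A small Python illustration of the staggered start-up idea described in the abstract above, with threads standing in for identical hardware subsystems and a list standing in for the programmable delay values. The delay values and time scale are arbitrary.

        import threading
        import time

        PROGRAMMED_DELAYS_US = [0, 100, 200, 300]   # assumed per-subsystem delays

        def subsystem(index, delay_us):
            time.sleep(delay_us / 1e6)              # wait the programmed delay
            # ...begin the power-intensive processing here...
            print(f"subsystem {index} started after {delay_us} us")

        threads = [threading.Thread(target=subsystem, args=(i, d))
                   for i, d in enumerate(PROGRAMMED_DELAYS_US)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()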
  • Patent number: 11522817
    Abstract: An improved buffer for networking and other computing devices comprises multiple memory instances, each having a distinct set of entries. Transport data units (“TDUs”) are divided into storage data units (“SDUs”), and each SDU is stored within a separate entry of a separate memory instance in a logical bank. One or more grids of the memory instances are organized into overlapping logical banks. The logical banks are arranged into views. Different destinations or other entities are assigned different views of the buffer. A memory instance may be shared between logical banks in different views. When overlapping logical banks are accessed concurrently, data in a memory instance that they share may be recovered using a parity SDU in another memory instance. The shared buffering enables more efficient buffer usage in a network device with a traffic manager shared amongst egress blocks. Example read and write algorithms for such buffers are disclosed. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: December 6, 2022
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
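    A toy Python illustration of the parity-based recovery mentioned in the abstract above: when one memory instance shared between overlapping logical banks cannot be read, the missing SDU is rebuilt from the others plus a parity SDU. XOR parity over equal-size SDUs is an assumption for the sketch.

        def make_parity(sdus):
            """Compute a parity SDU as the XOR of equal-length SDUs."""
            parity = bytearray(len(sdus[0]))
            for sdu in sdus:
                for i, b in enumerate(sdu):
                    parity[i] ^= b
            return bytes(parity)

        def recover_missing(available_sdus, parity):
            """Rebuild the one SDU that could not be read this access."""
            missing = bytearray(parity)
            for sdu in available_sdus:
                for i, b in enumerate(sdu):
                    missing[i] ^= b
            return bytes(missing)

        sdus = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
        parity = make_parity(sdus)
        assert recover_missing([sdus[0], sdus[2]], parity) == sdus[1]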
  • Patent number: 11516149
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by compute logic embedded in extension modules coupled directly to network switches. The compute logic performs collective actions, such as reduction operations, on gradients or other compute data processed by the nodes of the system. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the extension modules may take over some or all of the processing of the distributed system during the collective phase. An inline version of the module sits between a switch and the network. Data units carrying compute data are intercepted and processed using the compute logic, while other data units pass through the module transparently to or from the switch. Multiple modules may be connected to the switch, each coupled to a different group of nodes, and sharing intermediate results. A sidecar version is also described. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 3, 2021
    Date of Patent: November 29, 2022
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
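    A simplified Python sketch of the collective reduction performed by the compute logic described in the abstract above: gradient vectors from the worker nodes are combined element-wise and the result is returned to every node. The framing and the choice of summation/averaging are assumptions.

        def reduce_gradients(per_node_gradients, op="sum"):
            """Element-wise reduction over equal-length gradient vectors."""
            n = len(per_node_gradients)
            reduced = [0.0] * len(per_node_gradients[0])
            for grad in per_node_gradients:
                for i, v in enumerate(grad):
                    reduced[i] += v
            if op == "average":
                reduced = [v / n for v in reduced]
            return reduced

        # Each worker node would receive the same reduced vector back.
        workers = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]
        print(reduce_gradients(workers, op="average"))   # approximately [0.4, 0.5, 0.6]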
  • Patent number: 11489785
    Abstract: A network traffic manager receives, from an ingress port in a group of ingress ports, a cell of a packet destined for an egress port. Upon determining that a number of cells of the packet stored in a buffer queue meets a threshold value, the manager checks whether the group of ingress ports has been assigned a token for the queue. Upon determining that the group of ingress ports has been assigned the token, the manager determines that other cells of the packet are stored in the buffer, and accordingly stores the received cell in the buffer, and stores linking information for the received cell in a receive context for the packet. When all cells of the packet have been received, the manager copies linking information for the packet cells to the buffer queue or a copy generator queue, and releases the token from the group of ingress ports.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: November 1, 2022
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce H. Kwan, Ajit K. Jain
  • Patent number: 11483232
    Abstract: Automatic load-balancing techniques in a network device are used to select, from a multipath group, a path to assign to a flow based on observed state attributes such as path state(s), device state(s), port state(s), or queue state(s) of the paths. A mapping of the path previously assigned to a flow or group of flows (e.g., on account of having then been optimal in view of the observed state attributes) is maintained, for example, in a table. So long as the flow(s) are active and the path is still valid, the mapped path is selected for subsequent data units belonging to the flow(s), which may, among other effects, avoid or reduce packet re-ordering. However, if the flow(s) go idle, or if the mapped path fails, a new optimal path may be assigned to the flow(s) from the multipath group. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: October 25, 2022
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Rupa Budhia
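    A Python sketch of the flow-to-path mapping described in the abstract above: a previously assigned path is reused while the flow is active and the path remains valid, which avoids reordering; otherwise the currently best path is chosen. The scoring rule, idle timeout, and all names are assumptions.

        import time

        IDLE_TIMEOUT_S = 1.0              # assumed flow-idle threshold

        class MultipathSelector:
            def __init__(self, paths):
                self.paths = paths        # path name -> state, e.g. {"up", "queue_depth"}
                self.flow_table = {}      # flow key -> (path, last_seen)

            def best_path(self):
                # e.g. least-loaded valid path; a device would weigh port,
                # queue, and path state attributes here.
                valid = {p: s for p, s in self.paths.items() if s["up"]}
                return min(valid, key=lambda p: valid[p]["queue_depth"])

            def select(self, flow_key):
                now = time.monotonic()
                entry = self.flow_table.get(flow_key)
                if entry:
                    path, last_seen = entry
                    if self.paths[path]["up"] and now - last_seen < IDLE_TIMEOUT_S:
                        self.flow_table[flow_key] = (path, now)   # keep the mapping
                        return path
                path = self.best_path()                           # reassign on idle/failure
                self.flow_table[flow_key] = (path, now)
                return path

        paths = {"A": {"up": True, "queue_depth": 3}, "B": {"up": True, "queue_depth": 1}}
        selector = MultipathSelector(paths)
        print(selector.select(("10.0.0.1", "10.0.0.2", 6)))       # "B", then sticky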
  • Patent number: 11481350
    Abstract: Network chip utility is improved using multi-core architectures with auxiliary wiring between cores to permit cores to utilize components from otherwise inactive cores. The architectures permit, among other advantages, the re-purposing of functional components that reside in defective or otherwise non-functional cores. For instance, a four-core network chip with certain defects in three or even four cores could still, through operation of the techniques described herein, be utilized in a two or even three-core capacity. In an embodiment, the auxiliary wiring may be used to redirect data from a Serializer/Deserializer (“SerDes”) block of a first core to packet-switching logic on a second core, and vice-versa. In an embodiment, the auxiliary wiring may be utilized to circumvent defective components in the packet-switching logic itself. In an embodiment, a core may utilize buffer memories, forwarding tables, or other resources from other cores instead of or in addition to its own.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: October 25, 2022
    Assignee: Innovium, Inc.
    Inventors: Srinivas Gangam, Ajit Kumar Jain, Anurag Kumar Jain, Avinash Gyanendra Mani, Mohammad Kamel Issa
  • Patent number: 11470016
    Abstract: Approaches, techniques, and mechanisms are disclosed for efficiently buffering data units within a network device. A traffic manager or other network device component receives Transport Data Units (“TDUs”), which are sub-portions of Protocol Data Units (“PDUs”). Rather than buffer an entire TDU together, the component divides the TDU into multiple Storage Data Units (“SDUs”) that can fit in SDU buffer entries within physical memory banks. A TDU-to-SDU Mapping (“TSM”) memory stores TSM lists that indicate which SDU entries store SDUs for a given TDU. Physical memory banks in which the SDUs are stored may be grouped together into logical SDU banks that are accessed together as if a single bank. The TSM memory may include a number of distinct TSM banks, with each logical SDU bank having a corresponding TSM bank. Techniques for maintaining inter-packet and intra-packet linking data compatible with such buffers are also disclosed. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: October 11, 2022
    Assignee: Innovium, Inc.
    Inventors: Ajit Kumar Jain, Mohammad Kamel Issa, Avinash Gyanendra Mani, Ashwin Alapati
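    A simplified Python model of the TDU-to-SDU mapping described in the abstract above: a transport data unit is split into storage data units sized for individual buffer entries, and a TSM list records which entries hold the pieces. The sizes and the free-list allocator are assumptions.

        SDU_SIZE = 64                    # assumed SDU entry size in bytes

        sdu_buffer = {}                  # entry id -> SDU bytes
        free_entries = list(range(1024))
        tsm = {}                         # TDU id -> ordered list of SDU entry ids

        def write_tdu(tdu_id, tdu_bytes):
            entries = []
            for offset in range(0, len(tdu_bytes), SDU_SIZE):
                entry = free_entries.pop()
                sdu_buffer[entry] = tdu_bytes[offset:offset + SDU_SIZE]
                entries.append(entry)
            tsm[tdu_id] = entries        # the TSM list for this TDU

        def read_tdu(tdu_id):
            return b"".join(sdu_buffer[e] for e in tsm[tdu_id])

        write_tdu("tdu-1", bytes(150))   # stored as three SDUs (64 + 64 + 22 bytes)
        assert read_tdu("tdu-1") == bytes(150)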
  • Patent number: 11425195
    Abstract: Efficient scaling of in-network compute operations to large numbers of compute nodes is disclosed. Each compute node is connected to a same plurality of network compute nodes, such as compute-enabled network switches. Compute processes at the compute nodes generate local gradients or other vectors by, for instance, performing a forward pass on a neural network. Each vector comprises values for a same set of vector elements. Each network compute node is assigned to, based on the local vectors, reduce vector data for a different subset of the vector elements. Each network compute node returns a result chunk for the elements it processed back to each of the compute nodes, whereby each compute node receives the full result vector. This configuration may, in some embodiments, reduce buffering, processing, and/or other resource requirements for the network compute node or network at large. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: August 23, 2022
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
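    A Python sketch of the element-partitioned reduction described in the abstract above: each network compute node reduces only its assigned slice of the vector elements, and every compute node assembles the full result from the returned chunks. The even slicing and summation are assumptions.

        def chunk_bounds(length, num_network_nodes, index):
            base = length // num_network_nodes
            start = index * base
            end = length if index == num_network_nodes - 1 else start + base
            return start, end

        def network_node_reduce(local_vectors, node_index, num_network_nodes):
            start, end = chunk_bounds(len(local_vectors[0]), num_network_nodes, node_index)
            return [sum(vec[i] for vec in local_vectors) for i in range(start, end)]

        # Each compute node concatenates the chunks returned by the network nodes.
        vectors = [[1, 2, 3, 4], [10, 20, 30, 40]]     # local vectors from two workers
        chunks = [network_node_reduce(vectors, k, 2) for k in range(2)]
        print([x for chunk in chunks for x in chunk])  # [11, 22, 33, 44]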
  • Patent number: 11328222
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: May 10, 2022
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
  • Patent number: 11307773
    Abstract: According to an embodiment, power demands of a computing device or component thereof may be stabilized by performing redundant operations during periods of otherwise low power demand. In so doing, the current load of the device/component remains relatively stable, potentially greatly reducing voltage droops and overshoots. This can reduce the peak voltage and peak power rating of the device/component. In certain embodiments, such as in network switches and routers, the redundant operations may include queries against a content addressable memory (CAM), such as a ternary content addressable memory (TCAM). Moreover, in an embodiment the queries may be designed to always, or at least be highly likely to, miss the entries in the CAM, thereby ensuring maximum power usage. In another embodiment, the redundant operations include read operations on a random access memory (RAM). In other embodiments, redundant operations may be performed with respect to other power-intensive subsystems. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: April 19, 2022
    Assignee: Innovium, Inc.
    Inventors: Keith Michael Ring, Mohammad Kamel Issa
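    A conceptual Python sketch of the idea described in the abstract above: idle cycles are padded with redundant lookups so that a lookup engine's power draw stays roughly level. The always-miss key and the per-cycle function are illustrative assumptions; real hardware would issue CAM/TCAM or RAM operations.

        ALWAYS_MISS_KEY = 0xFFFFFFFF      # assumed key chosen to miss every entry

        def process_cycle(pending_lookups, lookup_engine):
            if pending_lookups:
                return lookup_engine(pending_lookups.pop(0))   # real work this cycle
            # Otherwise issue a redundant query so power demand does not dip.
            return lookup_engine(ALWAYS_MISS_KEY)

        table = {0x0A000001: "port 1"}

        def engine(key):
            return table.get(key, "miss")

        print(process_cycle([], engine))   # "miss", but the engine stayed busy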
  • Patent number: 11265268
    Abstract: The technology described in this document can be embodied in an integrated circuit device that comprises a first data processing unit comprising one or more input ports for receiving incoming data, one or more inter-unit data links that couple the first data processing unit to one or more other data processing units, a first ingress management module connected to the one or more inter-unit data links, the first ingress management module configured to store the incoming data and forward the stored data to the one or more inter-unit data links as multiple data packets, and a first ingress processing module. The integrated circuit device also comprises a second data processing unit comprising one or more output ports for transmitting outgoing data, and a second ingress management module connected to the one or more inter-unit data links.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: March 1, 2022
    Assignee: Innovium, Inc.
    Inventors: Ajit K. Jain, Avinash Gyanendra Mani, Mohammad Kamel Issa
  • Patent number: 11245632
    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when a rate at which the network device is receiving data belonging to the flow exceeds an excessive-rate threshold and is thus an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: February 8, 2022
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Rupa Budhia, Puneet Agarwal
  • Patent number: 11201831
    Abstract: Multiple ports of a network device are muxed together to form a single packed ingress interface into a buffer. A multiplexor alternates between the ports in alternating input clock cycles. Extra logic and wiring to provide a separate writer for each port is avoided, since the packed interface and buffer writers operate at higher speeds and/or have more bandwidth than the ports, and are thus able to handle incoming data for all of the ports coupled to the packed ingress interface. A packed ingress interface may also or instead support receiving data for multiple data units (e.g. multiple packets) from a single port in a single clock cycle, thereby reducing the potential to waste bandwidth at the end of data units. The interface may send the ending segments of the first data unit to the buffer. However, the interface may hold back the starting segments of the second data unit in a cache.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: December 14, 2021
    Assignee: Innovium, Inc.
    Inventor: Ajit Kumar Jain
  • Patent number: 11171890
    Abstract: An ingress packet processor in a device corresponds to a group of ports and receives network packets from ports in its port group. A traffic manager in the device manages buffers storing packet data for transmission to egress packet processors. An ingress arbiter is associated with a port group and connects the port group to an ingress packet processor coupled to the ingress arbiter. The ingress arbiter determines a traffic rate at which the associated ingress packet processor transmits packets to the traffic manager. The ingress arbiter controls an associated traffic shaper to generate a number of tokens that are assigned to the port group. Upon receiving packet data from a port in the group, the ingress arbiter determines, using information from the traffic shaper, whether a token is available. Conditioned on determining that a token is available, the ingress arbiter forwards the packet data to the ingress packet processor.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: November 9, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
  • Patent number: 11159455
    Abstract: Ingress packet processors in a device receive network packets from ingress ports. A crossbar in the device receives, from the ingress packet processors, packet data of the packets and transmits information about the packet data to a plurality of traffic managers in the device. Each traffic manager computes a total amount of packet data to be written to buffers across the plurality of traffic managers, where each traffic manager manages one or more buffers that store packet data. Each traffic manager compares the total amount of packet data to one or more threshold values. Upon determining that the total amount of packet data is equal to or greater than a threshold value, each traffic manager drops a portion of the packet data, and writes a remaining portion of the packet data to the buffers managed by the traffic manager. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: October 26, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Mohammad Kamel Issa, Ajit K. Jain
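    A simplified Python sketch of the threshold comparison described in the abstract above: a traffic manager compares the total packet data to be written across all traffic managers against a threshold and, if the threshold is met, drops a portion of its own data and writes the remainder. The single threshold value and the proportional split are assumptions; the abstract does not fix either.

        THRESHOLD_BYTES = 1_000_000

        def admit(total_bytes_across_managers, my_portion_bytes):
            """Return (bytes to write, bytes to drop) for this traffic manager."""
            if total_bytes_across_managers < THRESHOLD_BYTES:
                return my_portion_bytes, 0
            overshoot = total_bytes_across_managers - THRESHOLD_BYTES
            share = my_portion_bytes / total_bytes_across_managers
            dropped = min(my_portion_bytes, round(overshoot * share))
            return my_portion_bytes - dropped, dropped

        print(admit(total_bytes_across_managers=1_200_000, my_portion_bytes=300_000))
        # (250000, 50000): this manager drops its share of the 200000-byte overshoot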