Patents by Inventor Puneet Agarwal

Puneet Agarwal has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250150396
    Abstract: Techniques described herein may be implemented to support selecting a transmission path in a multi-path network link. In an embodiment, respective cumulative data carrying capacities for selected network paths in a group of network paths defining a multi-path group used to forward network packets from a first network node to a second network node are computed. A cumulative capacity comparison value for a received network packet in a flow of network packets is computed based at least in part on a hash value used to distinguish the flow from other flows of network packets. A specific network path is selected from amongst the network paths of the multi-path group, over which to forward the received network packet from the first network node towards the second network node, based on comparing the cumulative capacity comparison value with at least a subset of the cumulative data carrying capacities.
    Type: Application
    Filed: November 6, 2024
    Publication date: May 8, 2025
    Inventors: Rupa BUDHIA, Puneet AGARWAL
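
A minimal sketch of the selection scheme described above, read as weighted multipath hashing over prefix sums; the hash function and flow-key format are illustrative assumptions, not details from the filing:

```python
import hashlib
from itertools import accumulate

def select_path(paths, capacities, flow_key):
    """Pick a path from a multi-path group, weighted by per-path capacity."""
    # Respective cumulative data-carrying capacities (prefix sums).
    cumulative = list(accumulate(capacities))
    total = cumulative[-1]

    # Hash the flow key so every packet of a flow maps to the same path.
    h = int.from_bytes(hashlib.sha256(flow_key).digest()[:8], "big")

    # Cumulative capacity comparison value, scaled into the total capacity.
    comparison = h % total

    # First path whose cumulative capacity exceeds the comparison value.
    for path, bound in zip(paths, cumulative):
        if comparison < bound:
            return path

# Path B has twice A's capacity, so about 2/3 of flows land on B.
print(select_path(["A", "B"], [10, 20], b"10.0.0.1:1234->10.0.0.2:80"))
```
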
  • Patent number: 12289256
    Abstract: Link data is stored in a distributed link descriptor memory (“DLDM”) including memory instances storing protocol data unit (“PDU”) link descriptors (“PLDs”) or cell link descriptors (“CLDs”). Responsive to receiving a request for buffering a current transfer data unit (“TDU”) in a current PDU, a current PLD is accessed in a first memory instance in the DLDM. It is determined whether any data field designated to store address information in connection with a TDU is currently unoccupied within the current PLD. If no data field designated to store address information in connection with a TDU is currently unoccupied within the current PLD, a current CLD is accessed in a second memory instance in the plurality of memory instances of the same DLDM. Current address information in connection with the current TDU is stored in an address data field within the current CLD.
    Type: Grant
    Filed: October 12, 2023
    Date of Patent: April 29, 2025
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: William Brad Matthews, Puneet Agarwal, Ajit Kumar Jain
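
A minimal sketch of the descriptor chaining described above; the slot count, pool size, and class names are invented for illustration:

```python
class Descriptor:
    """One memory-instance entry: a PLD or CLD with fixed address slots."""
    SLOTS = 4                        # data fields for TDU address info

    def __init__(self):
        self.addresses = []          # TDU buffer addresses stored here
        self.next = None             # link to an overflow CLD, if any

class DLDM:
    """Distributed link descriptor memory: one shared descriptor pool."""
    def __init__(self, n_descriptors=1024):
        self.free = [Descriptor() for _ in range(n_descriptors)]

    def alloc(self):
        return self.free.pop()

    def store_tdu(self, pld, tdu_address):
        """Store a TDU's address in the PDU's PLD, spilling into CLDs."""
        d = pld
        # Walk the chain until a descriptor has an unoccupied field.
        while len(d.addresses) == Descriptor.SLOTS:
            if d.next is None:
                d.next = self.alloc()        # chain a CLD from the pool
            d = d.next
        d.addresses.append(tdu_address)

dldm = DLDM()
pld = dldm.alloc()                           # current PDU's link descriptor
for addr in range(6):                        # six TDUs: four fit, two spill
    dldm.store_tdu(pld, addr)
print(len(pld.addresses), len(pld.next.addresses))   # -> 4 2
```
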
  • Patent number: 12236323
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: February 25, 2025
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
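
A minimal sketch of the switch-side reduction described above, using summation as the collective operation; the chunk/worker bookkeeping is an assumption about one plausible realization:

```python
class ComputeSubsystem:
    """Reduces gradient chunks from N worker nodes inside the switch."""
    def __init__(self, n_workers):
        self.n_workers = n_workers
        self.pending = {}            # chunk_id -> received vectors

    def on_compute_data_unit(self, chunk_id, vector):
        """Called when the switch recognizes a compute data unit."""
        buf = self.pending.setdefault(chunk_id, [])
        buf.append(vector)
        if len(buf) < self.n_workers:
            return None              # hold until all workers contribute
        del self.pending[chunk_id]
        # Element-wise reduction (summation here; could be averaging or a
        # bitwise operation), forwarded back to the compute nodes.
        return [sum(vals) for vals in zip(*buf)]

switch = ComputeSubsystem(n_workers=2)
switch.on_compute_data_unit(0, [1.0, 2.0])           # first worker's gradient
print(switch.on_compute_data_unit(0, [3.0, 4.0]))    # -> [4.0, 6.0]
```
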
  • Patent number: 12231354
    Abstract: A network device obtains measurement data for one or more device attributes or environmental factors, and compares the measurement data to respective ranges specified for the device attributes or the environmental factors. Different ranges for the device attributes or the environmental factors are associated with different operating regions (OREs) classified for the device. The operating state of the network device corresponds to a first ORE of the different OREs, and various tasks performed by the device in the operating state are based on configurations specified by the first ORE. Based on comparing the measurement data, the network device identifies a second ORE that includes ranges for the device attributes or the environmental factors that match the measurement data. The network device transitions the operating state to correspond to the second ORE and adjusts the tasks performed by the device according to configurations specified by the second ORE.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: February 18, 2025
    Assignee: Marvell Asia Pte Ltd
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce H. Kwan
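
A minimal sketch of ORE matching and transition as described above; the attribute names, ranges, and configurations are invented for illustration:

```python
# Each ORE pairs measurement ranges with a device configuration.
ORES = {
    "normal": {"temp_c": (0, 70),  "link_util": (0.0, 0.8)},
    "hot":    {"temp_c": (70, 95), "link_util": (0.0, 1.0)},
}
CONFIGS = {
    "normal": {"fan": "auto", "telemetry_hz": 1},
    "hot":    {"fan": "max",  "telemetry_hz": 10},
}

def matching_ore(measurements):
    """Return the ORE whose ranges match every measured value."""
    for ore, ranges in ORES.items():
        if all(lo <= measurements[k] < hi for k, (lo, hi) in ranges.items()):
            return ore
    return None

state = "normal"
measurements = {"temp_c": 82, "link_util": 0.6}   # obtained measurement data
new_ore = matching_ore(measurements)
if new_ore and new_ore != state:
    state = new_ore                                # transition operating state
    print("applying config:", CONFIGS[state])      # adjust performed tasks
```
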
  • Publication number: 20250055810
    Abstract: A communication network includes first switches interconnected with second switches. Each first switch includes a first integrated circuit (IC) switch chip, downlink ports, and uplink ports. Each second switch includes ports coupled to at least one uplink port of each of the first switches, and a second IC switch chip in an IC package. To permit each second IC switch chip to forward packets amongst a large number of first switches and to reduce a number of external interconnects of the IC package, each second IC switch chip includes sets of multiplexer/demultiplexer circuitry, each multiplexer/demultiplexer circuitry being coupled between an external interconnect, and a set of multiple internal network interfaces of the second IC switch chip. The multiplexer/demultiplexer circuitry demultiplexes a data stream from the external interconnect to multiple internal network interfaces, and multiplexes multiple data streams from the multiple internal network interfaces to the external interconnect.
    Type: Application
    Filed: June 28, 2024
    Publication date: February 13, 2025
    Inventors: Kapil Vishwas Shrikhande, Soren Pedersen, Lenin Kumar Patra, Puneet AGARWAL
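
A minimal sketch of the multiplexer/demultiplexer behavior described above, modeled as simple word-interleaved time multiplexing (an assumption; the filing does not commit to this scheme):

```python
def mux(streams):
    """Interleave words from several internal interfaces onto one link."""
    for words in zip(*streams):
        yield from words

def demux(link_words, n_interfaces):
    """Split an interleaved link stream back into per-interface streams."""
    outs = [[] for _ in range(n_interfaces)]
    for i, word in enumerate(link_words):
        outs[i % n_interfaces].append(word)
    return outs

# Three internal network interfaces share one external interconnect.
internal = [["a0", "a1"], ["b0", "b1"], ["c0", "c1"]]
on_the_wire = list(mux(internal))
print(on_the_wire)                 # -> ['a0', 'b0', 'c0', 'a1', 'b1', 'c1']
print(demux(on_the_wire, 3))       # recovers the three internal streams
```
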
  • Publication number: 20240422104
    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when a rate at which the network device is receiving data belonging to the flow exceeds an excessive-rate threshold and is thus an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described.
    Type: Application
    Filed: September 3, 2024
    Publication date: December 19, 2024
    Inventors: William Brad MATTHEWS, Rupa BUDHIA, Puneet AGARWAL
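
A minimal sketch of per-flow rate tracking as described above, using a token bucket as the tracking structure (an assumption; the filing describes the tracking only abstractly):

```python
import time

class FlowTracker:
    """Per-flow token bucket; a negative balance flags an excessive rate."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate, self.burst = rate_bytes_per_s, burst_bytes
        self.flows = {}              # flow_id -> (tokens, last_seen)

    def on_data_unit(self, flow_id, size, now=None):
        """Update tracking info; True means the excessive-rate threshold
        is exceeded and an excessive-rate policy should be enabled."""
        now = time.monotonic() if now is None else now
        tokens, last = self.flows.get(flow_id, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        tokens -= size
        self.flows[flow_id] = (tokens, now)
        return tokens < 0

t = FlowTracker(rate_bytes_per_s=1000, burst_bytes=1500)
print(t.on_data_unit("flowA", 1500, now=0.0))   # False: within burst
print(t.on_data_unit("flowA", 1500, now=0.1))   # True: notify, discard, or reprioritize
```
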
  • Patent number: 12149437
    Abstract: Automatic load-balancing techniques in a network device are used to select, from a multipath group, a path to assign to a flow based on observed state attributes such as path state(s), device state(s), port state(s), or queue state(s) of the paths. A mapping of the path previously assigned to a flow or group of flows (e.g., on account of having then been optimal in view of the observed state attributes) is maintained, for example, in a table. So long as the flow(s) are active and the path is still valid, the mapped path is selected for subsequent data units belonging to the flow(s), which may, among other effects, avoid or reduce packet re-ordering. However, if the flow(s) go idle, or if the mapped path fails, a new optimal path may be assigned to the flow(s) from the multipath group.
    Type: Grant
    Filed: October 12, 2023
    Date of Patent: November 19, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Rupa Budhia
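
A minimal sketch of the sticky flow-to-path mapping described above; the idle timeout, scoring input, and names are illustrative:

```python
class FlowPathTable:
    """Maps flows to paths; remaps only on idle timeout or path failure."""
    def __init__(self, idle_timeout_s=1.0):
        self.idle_timeout = idle_timeout_s
        self.table = {}              # flow_id -> (path, last_active)

    def select(self, flow_id, valid_paths, path_score, now):
        entry = self.table.get(flow_id)
        if entry:
            path, last = entry
            if path in valid_paths and now - last < self.idle_timeout:
                self.table[flow_id] = (path, now)
                return path          # sticky mapping avoids reordering
        # New flow, idle flow, or failed path: pick the best path now.
        best = min(valid_paths, key=path_score)
        self.table[flow_id] = (best, now)
        return best

table = FlowPathTable()
score = {"p0": 3, "p1": 1}.get                 # e.g. observed queue depth
print(table.select("f1", ["p0", "p1"], score, now=0.0))  # -> p1 (least loaded)
print(table.select("f1", ["p0", "p1"], score, now=0.1))  # -> p1 (sticky)
print(table.select("f1", ["p0"], score, now=0.2))        # -> p0 (p1 failed)
```
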
  • Patent number: 12101260
    Abstract: When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease an enqueue rate of certain classes of traffic, such as of multicast traffic, whose reception may have caused and/or be likely to exacerbate garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to more optimally handle multi-destination traffic, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, the high-priority multi-destination data unit can no longer be dropped, and is linked to a replication queue for replication.
    Type: Grant
    Filed: February 10, 2023
    Date of Patent: September 24, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan, Ajit Kumar Jain
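
A minimal sketch of the hysteresis described above, throttling multicast admission while the garbage-collection backlog is high; the watermarks and priority classes are invented:

```python
HIGH_WATER = 8000    # cells queued for GC: start throttling
LOW_WATER  = 2000    # backlog recovered: restore normal rates

class GarbageCollectionGovernor:
    def __init__(self):
        self.throttled = False

    def on_gc_backlog(self, cells_awaiting_gc):
        if not self.throttled and cells_awaiting_gc > HIGH_WATER:
            self.throttled = True        # decrease multicast enqueue rate
        elif self.throttled and cells_awaiting_gc < LOW_WATER:
            self.throttled = False       # reverse the action

    def admit_multicast(self, priority):
        # Under throttle, only high-priority (e.g. mirrored) traffic passes.
        return (not self.throttled) or priority == "high"

gov = GarbageCollectionGovernor()
gov.on_gc_backlog(9000)
print(gov.admit_multicast("low"), gov.admit_multicast("high"))  # False True
```
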
  • Patent number: 12081444
    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when a rate at which the network device is receiving data belonging to the flow exceeds an excessive-rate threshold and is thus an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: September 3, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Rupa Budhia, Puneet Agarwal
  • Patent number: 12074808
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by compute logic embedded in extension modules coupled directly to network switches. The compute logic performs collective actions, such as reduction operations, on gradients or other compute data processed by the nodes of the system. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the extension modules may take over some or all of the processing of the distributed system during the collective phase. An inline version of the module sits between a switch and the network. Data units carrying compute data are intercepted and processed using the compute logic, while other data units pass through the module transparently to or from the switch. Multiple modules may be connected to the switch, each coupled to a different group of nodes, and sharing intermediate results. A sidecar version is also described.
    Type: Grant
    Filed: November 29, 2022
    Date of Patent: August 27, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
  • Patent number: 12068972
    Abstract: A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Schedulers within a traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may be configured to select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, once a data unit portion has been read from the buffer, it may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank specific controller may determine all of the read instructions and write operations that should be executed in a given time slot.
    Type: Grant
    Filed: June 12, 2023
    Date of Patent: August 20, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
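
A minimal sketch of per-bank read-instruction selection as described above; the scoring rule (longest backlog with read-cache headroom) is one invented example of a scoring mechanism:

```python
from collections import deque

CACHE_DEPTH = 4                      # shallow read data cache per egress block

class BankReadScheduler:
    """Selects one read instruction per buffer bank per time slot."""
    def __init__(self):
        self.queues = {}             # (bank, egress_block) -> deque of reads

    def enqueue(self, bank, egress_block, read_instr):
        self.queues.setdefault((bank, egress_block), deque()).append(read_instr)

    def pick(self, bank, cache_fill):
        """Score candidates: longest backlog wins, but an egress block
        whose read data cache is full is skipped to avoid overrunning it."""
        candidates = [(len(q), eb) for (b, eb), q in self.queues.items()
                      if b == bank and q and cache_fill[eb] < CACHE_DEPTH]
        if not candidates:
            return None
        _, eb = max(candidates)
        return self.queues[(bank, eb)].popleft()

s = BankReadScheduler()
s.enqueue(0, "eg0", "read TDU A")
s.enqueue(0, "eg1", "read TDU B")
s.enqueue(0, "eg1", "read TDU C")
print(s.pick(0, {"eg0": 0, "eg1": 0}))   # -> 'read TDU B' (eg1 backlog longer)
print(s.pick(0, {"eg0": 0, "eg1": 4}))   # -> 'read TDU A' (eg1 cache is full)
```
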
  • Patent number: 12019606
    Abstract: Certain hash-based operations in network devices and other devices, such as mapping and/or lookup operations, are improved by manipulating a hash key prior to executing a hash function on the hash key and/or by manipulating outputs of a hash function. A device may be configured to manipulate hash keys and/or outputs using manipulation logic based on one or more predefined manipulation values. A similar hash-based operation may be performed by multiple devices within a network of computing devices. Different devices may utilize different predefined manipulation values for their respective implementations of the manipulation logic. For instance, each device may assign itself a random mask value for key transformation logic as part of an initialization process when the device powers up and/or each time the device reboots. In an embodiment, described techniques may increase the entropy of hashing function outputs in certain contexts, thereby increasing the effectiveness of certain hashing functions.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: June 25, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
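
A minimal sketch of the key-transformation idea described above: each device XORs hash keys with a self-assigned random mask chosen at startup, so identical keys hash differently on different devices along a path. The hash function and key width are illustrative:

```python
import hashlib, os

class HashMapper:
    def __init__(self, key_width_bytes=13):
        # Self-assigned random mask, chosen at power-up / reboot.
        self.mask = os.urandom(key_width_bytes)

    def bucket(self, key, n_buckets):
        # Manipulate the hash key prior to executing the hash function.
        transformed = bytes(a ^ b for a, b in zip(key, self.mask))
        digest = hashlib.sha256(transformed).digest()
        return int.from_bytes(digest[:4], "big") % n_buckets

dev_a, dev_b = HashMapper(), HashMapper()
key = b"dst=192.0.2.1"
# Same key, different devices -> (very likely) different buckets,
# avoiding correlated hashing decisions across devices in a network.
print(dev_a.bucket(key, 128), dev_b.bucket(key, 128))
```
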
  • Patent number: 12021763
    Abstract: An improved buffer for networking and other computing devices comprises multiple memory instances, each having a distinct set of entries. Transport data units (“TDUs”) are divided into storage data units (“SDUs”), and each SDU is stored within a separate entry of a separate memory instance in a logical bank. One or more grids of the memory instances are organized into overlapping logical banks. The logical banks are arranged into views. Different destinations or other entities are assigned different views of the buffer. A memory instance may be shared between logical banks in different views. When overlapping logical banks are accessed concurrently, data in a memory instance that they share may be recovered using a parity SDU in another memory instance. The shared buffering enables more efficient buffer usage in a network device with a traffic manager shared amongst egress blocks. Example read and write algorithms for such buffers are disclosed.
    Type: Grant
    Filed: July 24, 2023
    Date of Patent: June 25, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
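
A minimal sketch of parity-based recovery as described above, using XOR parity (the standard construction) over the SDUs of a TDU; the widths are illustrative:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# A TDU split into SDUs across memory instances, plus one parity SDU.
sdus = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
parity = b"\x00\x00"
for s in sdus:
    parity = xor(parity, s)

# Memory instance 1 is busy serving an overlapping logical bank this
# cycle; reconstruct its SDU from the others plus parity, no stall.
recovered = parity
for i, s in enumerate(sdus):
    if i != 1:
        recovered = xor(recovered, s)
print(recovered == sdus[1])   # -> True
```
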
  • Patent number: 11968129
    Abstract: A network device organizes packets into various queues, in which the packets await processing. Queue management logic tracks how long certain packet(s), such as a designated marker packet, remain in a queue. Based thereon, the logic produces a measure of delay for the queue, referred to herein as the “queue delay.” Based on a comparison of the current queue delay to one or more thresholds, various associated delay-based actions may be performed, such as tagging and/or dropping packets departing from the queue, or preventing additional enqueues to the queue. In an embodiment, a queue may be expired based on the queue delay, and all packets dropped. In other embodiments, when a packet is dropped prior to enqueue into an assigned queue, copies of some or all of the packets already within the queue at the time the packet was dropped may be forwarded to a visibility component for analysis.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: April 23, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Ajit Kumar Jain
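
A minimal sketch of delay-tracked queuing as described above, treating every packet as its own marker; the threshold and the drop-on-departure action are illustrative:

```python
from collections import deque

class DelayTrackedQueue:
    def __init__(self, drop_threshold_s=0.005):
        self.q = deque()                 # (packet, enqueue_time)
        self.drop_threshold = drop_threshold_s
        self.queue_delay = 0.0           # current measure of queue delay

    def enqueue(self, pkt, now):
        self.q.append((pkt, now))        # each packet doubles as a marker

    def dequeue(self, now):
        pkt, t_in = self.q.popleft()
        self.queue_delay = now - t_in    # how long the packet waited
        if self.queue_delay > self.drop_threshold:
            return None                  # tag or drop on departure
        return pkt

q = DelayTrackedQueue()
q.enqueue("p0", now=0.000)
print(q.dequeue(now=0.010))   # None: 10 ms delay exceeds the 5 ms threshold
```
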
  • Patent number: 11943128
    Abstract: A switch or other network device may be configured as an ingress edge telemetry node in a telemetry domain. The ingress edge telemetry node may clone certain data units it processes, for example in response to certain telemetry triggers being met. The ingress edge telemetry node may further inject telemetry and/or other data into the cloned data unit. The cloned data unit continues along the same path as the original data unit until it reaches an egress edge telemetry node in the telemetry domain. The second node extracts the telemetry data from the cloned data unit and sends telemetry information based thereon to a telemetry collector, while the original data unit continues to its final destination. Nodes along the path between the first node and the second node may be configured as transit telemetry nodes that insert or otherwise update the telemetry data.
    Type: Grant
    Filed: February 10, 2023
    Date of Patent: March 26, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Meg Pei Lin, Rupa Budhia
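
A minimal sketch of the clone-and-inject flow described above; the telemetry record fields and the trigger are invented for illustration:

```python
import copy, time

def ingress_edge(packet, node_id, trigger):
    """Forward the original packet; when the trigger fires, also emit a
    clone carrying telemetry data that follows the same path."""
    out = [packet]
    if trigger(packet):
        clone = copy.deepcopy(packet)
        clone["telemetry"] = [{"node": node_id, "ts": time.time()}]
        out.append(clone)
    return out

def transit(packet, node_id):
    """Transit telemetry nodes update the telemetry data in the clone."""
    if "telemetry" in packet:
        packet["telemetry"].append({"node": node_id, "ts": time.time()})
    return packet

pkt = {"dst": "10.0.0.9", "payload": b"..."}
for p in ingress_edge(pkt, "edge1", trigger=lambda p: True):
    transit(p, "core7")
# At the egress edge node, the telemetry list is extracted and sent to
# the collector while the original packet continues to its destination.
```
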
  • Publication number: 20240071199
    Abstract: Disclosed herein is an AI-based system and method for generating warning alerts for a location to be excavated. The method comprises obtaining, from at least one external source, at least one underground asset map of the location to be excavated. For each of the at least one underground asset map, the method comprises locating a region of interest within the underground asset map corresponding to an identified underground utility service provider and extracting at least one feature within the region of interest. The at least one extracted feature is then compared with a plurality of features stored in a repository corresponding to the identified underground utility service provider, to determine a match. In response to the determination, the extracted feature is identified as a risk feature corresponding to the identified underground utility service provider and one or more warning alerts indicative of risk assets are generated.
    Type: Application
    Filed: August 29, 2023
    Publication date: February 29, 2024
    Inventors: Annapurna Sharma, Maheshakumara Shivakumara, Phanindra Reddy Vedikola, Puneet Agarwal, Sumant Kulkarni, Saurabh Bobde, Sakshi Goyal
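
A minimal sketch of the feature-comparison step described above, using cosine similarity as the match criterion (an assumption; the filing does not name a similarity measure):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def warnings_for(extracted_features, repository, threshold=0.9):
    """Compare extracted features against the provider's stored features;
    a match flags a risk feature and generates a warning alert."""
    alerts = []
    for feature in extracted_features:
        for asset_name, stored in repository.items():
            if cosine(feature, stored) >= threshold:
                alerts.append(f"risk asset near excavation: {asset_name}")
    return alerts

repo = {"gas main": [0.9, 0.1, 0.4], "fiber duct": [0.1, 0.8, 0.2]}
print(warnings_for([[0.88, 0.12, 0.41]], repo))   # matches the gas main
```
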
  • Patent number: 11895015
    Abstract: A packet to be forwarded over a computer network to a destination is received. A group of multiple network paths is available to forward the packet to the destination. One or more path selection factors are determined to be used to identify a specific network load balancing algorithm to select a specific network path from the group of multiple network paths. The one or more path selection factors include at least one path selection factor determined based at least in part on a dynamic state of the computer network or a network node in the computer network. In response to selecting, by the specific network load balancing algorithm, the specific network path from among the group of multiple network paths, the packet is forwarded over the specific network path.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: February 6, 2024
    Assignee: Marvell Asia Pte Ltd
    Inventors: Rupa Budhia, William Brad Matthews, Puneet Agarwal
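
A minimal sketch of the two-stage choice described above: dynamic state selects the load-balancing algorithm, and the algorithm then selects the path. The factors and algorithms shown are illustrative:

```python
def static_hash(paths, flow_id):
    """Cheap, flow-sticky default: hash the flow onto a path."""
    return paths[hash(flow_id) % len(paths)]

def dynamic_least_loaded(paths, flow_id):
    """State-aware choice: send the flow to the least-loaded path."""
    return min(paths, key=lambda p: p["load"])

def pick_algorithm(link_utilization, flow_is_elephant):
    """Path selection factors (here: dynamic link utilization and flow
    size) identify the load-balancing algorithm, not the path itself."""
    if flow_is_elephant or link_utilization > 0.8:
        return dynamic_least_loaded
    return static_hash

paths = [{"name": "p0", "load": 0.9}, {"name": "p1", "load": 0.2}]
algo = pick_algorithm(link_utilization=0.85, flow_is_elephant=False)
print(algo(paths, "f1")["name"])    # -> p1; forward the packet over it
```
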
  • Publication number: 20240039852
    Abstract: Approaches, techniques, and mechanisms are disclosed for improving operations of a network switching device and/or network-at-large by utilizing queue delay as a basis for measuring congestion for the purposes of Automated Queue Management (“AQM”) and/or other congestion-based policies. Queue delay is an exact or approximate measure of the amount of time a data unit waits at a network device as a consequence of queuing, such as the amount of time the data unit spends in an egress queue while the data unit is being buffered by a traffic manager. Queue delay may be used as a substitute for queue size in existing AQM, Weighted Random Early Detection (“WRED”), Tail Drop, Explicit Congestion Notification (“ECN”), reflection, and/or other congestion management or notification algorithms. Or, a congestion score calculated based on the queue delay and one or more other metrics, such as queue size, may be used as a substitute.
    Type: Application
    Filed: October 10, 2023
    Publication date: February 1, 2024
    Inventors: William Brad MATTHEWS, Bruce Hui KWAN, Puneet AGARWAL
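
A minimal sketch of substituting queue delay for queue size in an ECN/WRED-style marking curve, as described above; the curve parameters are illustrative:

```python
import random

MIN_DELAY, MAX_DELAY, MAX_P = 0.001, 0.010, 0.1   # seconds, seconds, probability

def ecn_mark(queue_delay_s):
    """Mark probability rises linearly between the two delay thresholds,
    exactly as WRED does with queue size."""
    if queue_delay_s <= MIN_DELAY:
        return False
    if queue_delay_s >= MAX_DELAY:
        return True
    p = MAX_P * (queue_delay_s - MIN_DELAY) / (MAX_DELAY - MIN_DELAY)
    return random.random() < p

print(ecn_mark(0.0005), ecn_mark(0.02))   # -> False True
```
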
  • Patent number: 11888743
    Abstract: Prefix entries are efficiently stored at a networking device for performance of a longest prefix match against the stored entries. A prefix entry generally refers to a data entry which maps a particular prefix to one or more actions to be performed by a networking device with respect to network packets or other data structures associated with a network packet that matches the particular prefix. In the context of a router networking device handling a data packet, the one or more actions may include, for example, forwarding a received network packet to a particular “next hop” networking device in order to progress the network packet towards its final destination, applying firewall rule(s), manipulating the packet, and so forth. To reduce a total amount of space occupied by a prefix tree in storage, each of the nodes of a prefix tree may be configured to store only an incremental portion of a prefix relative to its parent node.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: January 30, 2024
    Assignee: Innovium, Inc.
    Inventors: Puneet Agarwal, Rupa Budhia, Meg Lin
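
A minimal sketch of a prefix tree whose nodes store only the increment beyond their parent, shown on string prefixes for brevity; node splitting on partial overlap is omitted:

```python
class Node:
    def __init__(self, increment, action=None):
        self.increment = increment   # portion of the prefix beyond the parent
        self.action = action         # e.g. next hop, firewall rule
        self.children = []

def insert(parent, full_prefix, action, depth=0):
    """Store only the incremental portion relative to the deepest
    matching ancestor (splitting of partially matching nodes omitted)."""
    rest = full_prefix[depth:]
    for child in parent.children:
        if rest.startswith(child.increment):
            insert(child, full_prefix, action, depth + len(child.increment))
            return
    parent.children.append(Node(rest, action))

def longest_prefix_match(root, key):
    best, node, depth = None, root, 0
    while True:
        for child in node.children:
            if key.startswith(child.increment, depth):
                depth += len(child.increment)
                best = child.action or best
                node = child
                break
        else:
            return best

root = Node("")
insert(root, "10.1.", "next-hop A")
insert(root, "10.1.2.", "next-hop B")           # stores only increment "2."
print(longest_prefix_match(root, "10.1.2.7"))   # -> next-hop B
```
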
  • Patent number: 11888931
    Abstract: Efficient scaling of in-network compute operations to large numbers of compute nodes is disclosed. Each compute node is connected to a same plurality of network compute nodes, such as compute-enabled network switches. Compute processes at the compute nodes generate local gradients or other vectors by, for instance, performing a forward pass on a neural network. Each vector comprises values for a same set of vector elements. Each network compute node is assigned to, based on the local vectors, reduce vector data for a different subset of the vector elements. Each network compute node returns a result chunk for the elements it processed back to each of the compute nodes, whereby each compute node receives the full result vector. This configuration may, in some embodiments, reduce buffering, processing, and/or other resource requirements for the network compute node or network at large.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: January 30, 2024
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
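
A minimal sketch of the element-sharded reduction described above: each network compute node reduces one chunk of every worker's vector and returns its result chunk to all workers. The even sharding and summation are illustrative:

```python
def shard(vector, n_shards):
    """Split a vector into equal chunks, one per network compute node."""
    k = len(vector) // n_shards
    return [vector[i * k:(i + 1) * k] for i in range(n_shards)]

def reduce_chunks(per_worker_chunks):
    """Element-wise reduction (summation) over one chunk from each worker."""
    return [sum(vals) for vals in zip(*per_worker_chunks)]

workers = [[1, 2, 3, 4], [10, 20, 30, 40]]    # local gradient vectors
n_switches = 2

# Each network compute node s receives chunk s from every worker.
result_chunks = [
    reduce_chunks([shard(w, n_switches)[s] for w in workers])
    for s in range(n_switches)
]
# Every worker receives all result chunks -> the full reduced vector.
full_result = [x for chunk in result_chunks for x in chunk]
print(full_result)   # -> [11, 22, 33, 44]
```
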