Patents by Inventor William Brad MATTHEWS

William Brad MATTHEWS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250190273
    Abstract: Efficient scaling of in-network compute operations to large numbers of compute nodes is disclosed. Each compute node is connected to a same plurality of network compute nodes, such as compute-enabled network switches. Compute processes at the compute nodes generate local gradients or other vectors by, for instance, performing a forward pass on a neural network. Each vector comprises values for a same set of vector elements. Each network compute node is assigned to, based on the local vectors, reduce vector data for a different subset of the vector elements. Each network compute node returns a result chunk for the elements it processed back to each of the compute nodes, whereby each compute node receives the full result vector. This configuration may, in some embodiments, reduce buffering, processing, and/or other resource requirements for the network compute node or network at large.
    Type: Application
    Filed: December 11, 2023
    Publication date: June 12, 2025
    Inventors: William Brad MATTHEWS, Puneet AGARWAL, Bruce Hui KWAN
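
To make the element-sharding idea in the abstract above concrete, the following Python sketch simulates the reduction in software: each network compute node (switch) is assigned a disjoint slice of vector elements, reduces that slice across every compute node's local vector, and contributes its chunk to the full result. The function names, the even chunking rule, and the summation reduce operation are illustrative assumptions, not details taken from the application.

```python
# Illustrative sketch only: a software simulation of the element-sharded
# in-network reduction described above. Names and the even chunking rule
# are assumptions, not the patented implementation.
from typing import List

def shard_bounds(vector_len: int, num_switches: int) -> List[range]:
    """Assign each network compute node (switch) a contiguous slice of elements."""
    base, extra = divmod(vector_len, num_switches)
    bounds, start = [], 0
    for s in range(num_switches):
        size = base + (1 if s < extra else 0)
        bounds.append(range(start, start + size))
        start += size
    return bounds

def in_network_allreduce(local_vectors: List[List[float]], num_switches: int) -> List[float]:
    """Each switch sums only its assigned elements across all compute nodes,
    then 'returns' its result chunk; every node ends up with the full result."""
    vector_len = len(local_vectors[0])
    result = [0.0] * vector_len
    for elems in shard_bounds(vector_len, num_switches):
        # One switch reduces its shard across every compute node's local vector.
        for i in elems:
            result[i] = sum(v[i] for v in local_vectors)
    return result

if __name__ == "__main__":
    gradients = [[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5], [1.0, 1.0, 1.0, 1.0]]
    print(in_network_allreduce(gradients, num_switches=2))  # [2.5, 3.5, 4.5, 5.5]
```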
  • Publication number: 20240422104
    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when the rate at which it is receiving data belonging to the flow exceeds an excessive-rate threshold, and thus when the flow is an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described.
    Type: Application
    Filed: September 3, 2024
    Publication date: December 19, 2024
    Inventors: William Brad MATTHEWS, Rupa BUDHIA, Puneet AGARWAL
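
The following Python sketch illustrates one way the per-flow tracking described above could be modeled in software: a windowed byte counter per flow is compared against an excessive-rate threshold each time a data unit arrives. The class name, the fixed measurement window, and the return convention are assumptions for illustration; the memory optimizations mentioned in the abstract are not modeled here.

```python
# Hypothetical sketch of per-flow rate tracking with an excessive-rate threshold;
# the windowed byte counter and all names are illustrative assumptions.
import time
from collections import defaultdict

class FlowRateTracker:
    def __init__(self, threshold_bytes_per_sec: float, window_sec: float = 1.0):
        self.threshold = threshold_bytes_per_sec
        self.window = window_sec
        # flow_id -> [bytes seen in current window, window start time]
        self.counters = defaultdict(lambda: [0, time.monotonic()])

    def on_data_unit(self, flow_id, size_bytes: int) -> bool:
        """Update tracking info for the flow; return True if it is excessive-rate."""
        counter = self.counters[flow_id]
        now = time.monotonic()
        if now - counter[1] >= self.window:
            counter[0], counter[1] = 0, now  # start a new measurement window
        counter[0] += size_bytes
        return (counter[0] / self.window) > self.threshold

tracker = FlowRateTracker(threshold_bytes_per_sec=1_000_000)
print(tracker.on_data_unit(flow_id=("10.0.0.1", "10.0.0.2", 443), size_bytes=1500))
```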
  • Publication number: 20240340250
    Abstract: Packet metadata for incoming packets are buffered in queue selection buffers associated with a port of a network node. Packet data for outgoing packets are buffered in a port selection buffer associated with the port. At a selection clock cycle, while a port scheduler of the network node selects a subset of the packet data for a subset of the outgoing packets from the port selection buffer, a queue scheduler of the port concurrently selects a subset of the packet metadata for a subset of the incoming packets from the queue selection buffers and adds new packet data for new outgoing packets to the port selection buffer of the port. The new packet data are derived based at least in part on the subset of the packet metadata for the subset of the incoming packets.
    Type: Application
    Filed: July 27, 2023
    Publication date: October 10, 2024
    Inventors: William Brad MATTHEWS, Ashwin ALAPATI
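
A simplified software model of the per-cycle behavior described above is sketched below: in one selection clock cycle, the port scheduler pulls packet data from the port selection buffer while the queue scheduler pulls packet metadata from a queue selection buffer and pushes newly derived packet data into the port selection buffer. The data structures and the first-non-empty selection order are assumptions made purely for illustration.

```python
# Purely illustrative model of the concurrent per-cycle scheduling described above;
# buffer names and selection order are assumptions for the sketch.
from collections import deque

def selection_cycle(queue_selection_buffers, port_selection_buffer):
    """One selection clock cycle: the port scheduler picks outgoing packet data while
    the queue scheduler concurrently picks incoming packet metadata and derives
    new packet data to add to the port selection buffer."""
    # Port scheduler: select packet data for an outgoing packet (before any additions).
    selected_out = port_selection_buffer.popleft() if port_selection_buffer else None

    # Queue scheduler: select packet metadata from one of the queue selection buffers.
    for q in queue_selection_buffers:
        if q:
            metadata = q.popleft()
            # New packet data derived at least in part from the selected metadata.
            port_selection_buffer.append({"derived_from": metadata})
            break
    return selected_out

out = selection_cycle([deque(["meta-1"]), deque()], deque(["pkt-0"]))
print(out)  # "pkt-0" departs; data derived from "meta-1" now awaits a later cycle
```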
  • Publication number: 20240257262
    Abstract: A pre-scaled accumulated byte count of a port of a network node over a sampling period is scaled with a scaling factor to generate a scaled accumulated byte count. The pre-scaled accumulated byte count represents a total number of bytes in packets transferred by the port. The scaling factor represents a first port-specific attribute of the port and scales a port-specific maximum throughput of the port to a specific maximum port throughput of the network node. An iterative vector encoding method is applied to the scaled accumulated byte count to generate an encoded bit vector comprising bits at respective ordered bit positions. Each set bit of the encoded bit vector represents a respective weighted value of port utilization of the port. The encoded bit vector is stored, at a map location, in an operational statistics map.
    Type: Application
    Filed: July 28, 2023
    Publication date: August 1, 2024
    Inventors: William Brad MATTHEWS, Rupa BUDHIA, Meg Pei LIN
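
The abstract above does not spell out the encoding itself, so the sketch below is speculative: it scales the accumulated byte count, converts it to a utilization fraction, and greedily sets bits whose descending weights sum toward that fraction, so that each set bit carries a weighted value of port utilization. The specific weights, the greedy decomposition, and all parameter names are assumptions.

```python
# Illustrative sketch only: the bit weights and greedy decomposition are assumptions
# about what an "iterative vector encoding" might look like, not the patented encoding.
def encode_port_utilization(pre_scaled_bytes: int, scaling_factor: float,
                            bit_weights=(0.5, 0.25, 0.125, 0.0625),
                            max_port_bytes: int = 1_000_000) -> list:
    """Scale the accumulated byte count, then greedily set bits whose weights
    (fractions of maximum port throughput) sum toward the scaled count."""
    scaled = pre_scaled_bytes * scaling_factor
    utilization = scaled / max_port_bytes          # fraction of max throughput used
    bits, remaining = [], utilization
    for weight in bit_weights:                     # ordered bit positions, descending weight
        if remaining >= weight:
            bits.append(1)
            remaining -= weight
        else:
            bits.append(0)
    return bits

# e.g. a port that moved 400,000 bytes at scaling factor 2.0 encodes as [1, 1, 0, 0]
print(encode_port_utilization(400_000, 2.0))
```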
  • Publication number: 20240039852
    Abstract: Approaches, techniques, and mechanisms are disclosed for improving operations of a network switching device and/or network-at-large by utilizing queue delay as a basis for measuring congestion for the purposes of Automated Queue Management (“AQM”) and/or other congestion-based policies. Queue delay is an exact or approximate measure of the amount of time a data unit waits at a network device as a consequence of queuing, such as the amount of time the data unit spends in an egress queue while the data unit is being buffered by a traffic manager. Queue delay may be used as a substitute for queue size in existing AQM, Weighted Random Early Detection (“WRED”), Tail Drop, Explicit Congestion Notification (“ECN”), reflection, and/or other congestion management or notification algorithms. Or, a congestion score calculated based on the queue delay and one or more other metrics, such as queue size, may be used as a substitute.
    Type: Application
    Filed: October 10, 2023
    Publication date: February 1, 2024
    Inventors: William Brad MATTHEWS, Bruce Hui KWAN, Puneet AGARWAL
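
As a rough illustration of substituting queue delay for queue size, the Python sketch below computes the delay a data unit spent in an egress queue and feeds it into a WRED/ECN-style linear marking curve. The threshold values, the linear curve, and the function names are assumptions rather than details from the application.

```python
# Hedged sketch: using queue delay in place of queue size in a WRED/ECN-style
# marking decision. Thresholds and the linear curve are assumptions.
import random

def mark_probability(queue_delay_us: float, min_delay_us: float = 50.0,
                     max_delay_us: float = 500.0, max_prob: float = 0.1) -> float:
    """Linear WRED-like marking curve driven by queue delay rather than queue size."""
    if queue_delay_us <= min_delay_us:
        return 0.0
    if queue_delay_us >= max_delay_us:
        return 1.0
    return max_prob * (queue_delay_us - min_delay_us) / (max_delay_us - min_delay_us)

def should_mark_ecn(enqueue_ts_us: float, dequeue_ts_us: float) -> bool:
    """Queue delay is the time the data unit spent waiting in the egress queue."""
    delay = dequeue_ts_us - enqueue_ts_us
    return random.random() < mark_probability(delay)

print(should_mark_ecn(enqueue_ts_us=0.0, dequeue_ts_us=300.0))
```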
  • Publication number: 20230269192
    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when the rate at which it is receiving data belonging to the flow exceeds an excessive-rate threshold, and thus when the flow is an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described.
    Type: Application
    Filed: April 28, 2023
    Publication date: August 24, 2023
    Inventors: William Brad MATTHEWS, Rupa BUDHIA, Puneet AGARWAL
  • Publication number: 20150339063
    Abstract: A system and method for efficient buffer management for banked shared memory designs are provided. In one embodiment, a controller within the switch is configured to manage the buffering of the shared memory banks by allocating full address sets to write sources. Each full address set that is allocated to a write source includes a number of memory addresses, wherein each memory address is associated with a different shared memory bank. A size of the full address set can be based on a determined number of buffer access contenders.
    Type: Application
    Filed: August 3, 2015
    Publication date: November 26, 2015
    Inventor: William Brad MATTHEWS
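
The sketch below models the "full address set" idea in the abstract above: a controller keeps a free list per shared memory bank and hands each write source one free address from every bank, so concurrent writers can always be steered to non-conflicting banks. The class design is an assumption; in particular, the set here is sized to the number of banks rather than to the number of buffer access contenders mentioned in the abstract.

```python
# Illustrative sketch, not the patented controller: allocating each write source a
# "full address set" containing one free address from every shared memory bank.
class BankedBufferController:
    def __init__(self, num_banks: int, addresses_per_bank: int):
        # One free list per shared memory bank.
        self.free = [list(range(addresses_per_bank)) for _ in range(num_banks)]

    def allocate_full_address_set(self):
        """Return one free address from each bank (None if any bank is exhausted)."""
        if any(not bank for bank in self.free):
            return None
        return [bank.pop() for bank in self.free]

    def release(self, address_set):
        """Return an address set's addresses to their respective banks."""
        for bank_id, addr in enumerate(address_set):
            self.free[bank_id].append(addr)

# With 4 banks, each write source receives 4 candidate addresses per set.
controller = BankedBufferController(num_banks=4, addresses_per_bank=1024)
print(controller.allocate_full_address_set())
```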
  • Publication number: 20150207739
    Abstract: Aspects of oversubscription monitoring are described. In one embodiment, oversubscription monitoring includes accumulating an amount of data that arrives at a network component over at least one epoch of time. Further, a core processing rate at which data can be processed by the network component is calculated. Based on the amount of data and the core processing rate, it is determined whether the network component is operating in an oversubscribed region of operation. In one embodiment, when the network component is operating in the oversubscribed region of operation, certain quality of service metrics are monitored. Using the monitored metrics, a network operation display object may be generated for identifying or troubleshooting network errors during an oversubscribed region of operation of the network component.
    Type: Application
    Filed: April 1, 2015
    Publication date: July 23, 2015
    Applicant: BROADCOM CORPORATION
    Inventors: William Brad MATTHEWS, Puneet AGARWAL, Bruce Hui KWAN
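
A minimal sketch of the epoch-based check described above, assuming a straightforward comparison of the accumulated arrival rate against the core processing rate; the function names and the decision to sample quality-of-service metrics only while oversubscribed are illustrative simplifications.

```python
# Hedged sketch of the epoch-based oversubscription check; names and the simple
# rate comparison are assumptions for illustration.
def is_oversubscribed(bytes_arrived_in_epoch: int, epoch_sec: float,
                      core_processing_rate_bps: float) -> bool:
    """The component is in the oversubscribed region when the arrival rate over
    the epoch exceeds the rate at which its core can process data."""
    arrival_rate_bps = (bytes_arrived_in_epoch * 8) / epoch_sec
    return arrival_rate_bps > core_processing_rate_bps

def monitor_epoch(bytes_arrived: int, epoch_sec: float, core_rate_bps: float,
                  qos_metrics: dict) -> dict:
    """Only sample quality-of-service metrics while oversubscribed, as described above."""
    if is_oversubscribed(bytes_arrived, epoch_sec, core_rate_bps):
        return {"oversubscribed": True, "qos": dict(qos_metrics)}
    return {"oversubscribed": False}

print(monitor_epoch(bytes_arrived=2_000_000, epoch_sec=0.001,
                    core_rate_bps=10_000_000_000, qos_metrics={"drops": 12}))
```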
  • Publication number: 20140254357
    Abstract: Disclosed are various embodiments for facilitating network flows in a networked environment. In various embodiments, a switch transmits data using an egress port that comprises an egress queue. The switch sets a congestion notification threshold for the egress queue. The switch generates a drain rate metric based at least in part on a drain rate for the egress queue, and the congestion notification threshold is adjusted based at least in part on the drain rate metric.
    Type: Application
    Filed: September 30, 2013
    Publication date: September 11, 2014
    Inventors: Puneet AGARWAL, Bruce Hui KWAN, William Brad MATTHEWS, Vahid TABATABAEE
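
The Python sketch below illustrates one plausible reading of the drain-rate mechanism above: the egress queue maintains an exponentially weighted drain rate metric and scales its congestion notification threshold with it, so a slowly draining queue signals congestion earlier. The EWMA form and the proportional scaling are assumptions, not the claimed method.

```python
# Illustrative sketch: adjusting an egress queue's congestion notification threshold
# from a drain rate metric. The EWMA and proportional scaling are assumptions.
class EgressQueue:
    def __init__(self, base_threshold_bytes: int, nominal_drain_rate_bps: float,
                 ewma_weight: float = 0.25):
        self.base_threshold = base_threshold_bytes
        self.nominal_rate = nominal_drain_rate_bps
        self.ewma_weight = ewma_weight
        self.drain_rate_metric = nominal_drain_rate_bps
        self.cn_threshold = base_threshold_bytes

    def on_drain_sample(self, bytes_drained: int, interval_sec: float) -> None:
        """Update the drain rate metric, then scale the threshold with it: a slowly
        draining queue should signal congestion earlier (lower threshold)."""
        sample_bps = (bytes_drained * 8) / interval_sec
        self.drain_rate_metric = (self.ewma_weight * sample_bps +
                                  (1 - self.ewma_weight) * self.drain_rate_metric)
        self.cn_threshold = int(self.base_threshold *
                                self.drain_rate_metric / self.nominal_rate)

q = EgressQueue(base_threshold_bytes=200_000, nominal_drain_rate_bps=10_000_000_000)
q.on_drain_sample(bytes_drained=500_000, interval_sec=0.001)  # draining at ~4 Gb/s
print(q.cn_threshold)  # threshold drops below the 200 kB base
```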
  • Publication number: 20140052935
    Abstract: According to one general aspect, a method may include, in one embodiment, grouping a plurality of at least single-ported memory banks together to substantially act as a single at least dual-ported aggregated memory element. In various embodiments, the method may also include controlling read access to the memory banks such that a read operation may occur from any memory bank in which data is stored. In some embodiments, the method may include controlling write access to the memory banks such that a write operation may occur to any memory bank which is not being accessed by a read operation.
    Type: Application
    Filed: August 13, 2013
    Publication date: February 20, 2014
    Applicant: BROADCOM CORPORATION
    Inventor: William Brad MATTHEWS
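
A minimal sketch of the bank-grouping idea in the abstract above, assuming a simple per-cycle arbiter: a read is served by the bank that holds the requested data, and a concurrent write is steered to any bank not occupied by that read, so a group of single-ported banks collectively behaves like a dual-ported memory. The class, its location map, and the free-slot search are assumptions for illustration.

```python
# Minimal sketch under assumed arbitration rules: one read plus one write per cycle
# never touch the same single-ported bank, approximating dual-ported behavior.
class AggregatedMemory:
    def __init__(self, num_banks: int, words_per_bank: int):
        self.banks = [[None] * words_per_bank for _ in range(num_banks)]
        self.location = {}  # key -> (bank, offset), tracked so reads find their bank

    def cycle(self, read_key=None, write_key=None, write_value=None):
        """Perform at most one read and one write in the same 'cycle' without
        accessing the same single-ported bank twice."""
        read_bank, read_value = None, None
        if read_key is not None and read_key in self.location:
            read_bank, offset = self.location[read_key]
            read_value = self.banks[read_bank][offset]
        if write_key is not None:
            for bank_id, bank in enumerate(self.banks):
                if bank_id == read_bank:
                    continue                      # skip the bank busy with the read
                if None in bank:
                    offset = bank.index(None)
                    bank[offset] = write_value
                    self.location[write_key] = (bank_id, offset)
                    break
        return read_value

mem = AggregatedMemory(num_banks=2, words_per_bank=4)
mem.cycle(write_key="a", write_value=1)                        # write lands in bank 0
print(mem.cycle(read_key="a", write_key="b", write_value=2))   # reads 1; write goes to bank 1
```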