Patents by Inventor Srinivas Vaduvatha

Srinivas Vaduvatha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12363204
    Abstract: Aspects of the disclosure are directed to a high performance connection scheduler for reliable transport protocols in data center networking. The connection scheduler can handle enqueue events, dequeue events, and update events. The connection scheduler can include a connection queue, scheduling queue, and quality of service arbiter to support scheduling a large number of connections at a high rate.
    Type: Grant
    Filed: March 29, 2024
    Date of Patent: July 15, 2025
    Assignee: Google LLC
    Inventors: Abhishek Agarwal, Weihuang Wang, Weiwei Jiang, Srinivas Vaduvatha, Jiazhen Zheng
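
The entry above describes a scheduler built from a connection queue, a scheduling queue, and a quality-of-service arbiter handling enqueue, dequeue, and update events. The patent text does not publish code; the sketch below is an illustrative software model of that structure, assuming a strict-priority arbiter across QoS classes with round-robin within a class. All class, method, and parameter names are invented for the example.

```python
import collections

class ConnectionScheduler:
    """Illustrative model: per-connection queues feed per-QoS-class scheduling
    queues, which an arbiter drains strict-priority across classes and
    round-robin within a class. Names and policy are assumptions."""

    def __init__(self, qos_classes):
        # qos_classes is ordered highest priority first, e.g. ["high", "low"].
        self.qos_classes = list(qos_classes)
        self.conn_queues = collections.defaultdict(collections.deque)      # conn_id -> packets
        self.sched_queues = {c: collections.deque() for c in qos_classes}  # class -> conn_ids
        self.conn_class = {}                                               # conn_id -> class

    def enqueue(self, conn_id, qos_class, packet):
        """Enqueue event: add work for a connection and make it schedulable."""
        newly_active = not self.conn_queues[conn_id]
        self.conn_queues[conn_id].append(packet)
        self.conn_class[conn_id] = qos_class
        if newly_active:
            self.sched_queues[qos_class].append(conn_id)

    def update(self, conn_id, new_class):
        """Update event: move an active connection to a different QoS class."""
        old = self.conn_class.get(conn_id)
        if old is not None and old != new_class:
            if conn_id in self.sched_queues[old]:
                self.sched_queues[old].remove(conn_id)
                self.sched_queues[new_class].append(conn_id)
            self.conn_class[conn_id] = new_class

    def dequeue(self):
        """Dequeue event: pick the next connection and return one of its packets."""
        for qos_class in self.qos_classes:                  # strict priority
            ready = self.sched_queues[qos_class]
            if ready:
                conn_id = ready.popleft()                   # round-robin within class
                packet = self.conn_queues[conn_id].popleft()
                if self.conn_queues[conn_id]:
                    ready.append(conn_id)                   # still has work
                return conn_id, packet
        return None                                         # nothing schedulable
```

For instance, calling ConnectionScheduler(["high", "low"]), then enqueue(7, "low", "pkt-0"), makes dequeue() return (7, "pkt-0").
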
  • Patent number: 12216587
    Abstract: A packet cache system includes a cache memory allocator for receiving a memory address corresponding to a non-cache memory and allocated to a packet, and associating the memory address with a cache memory address; a hash table for storing the memory address and the cache memory address, with the memory address as a key and the cache memory address as a value; a cache memory for storing the packet at a location indicated by the cache memory address; and an eviction engine for determining one or more cached packets to remove from the cache memory and place in the non-cache memory when occupancy of the cache memory is high.
    Type: Grant
    Filed: February 21, 2024
    Date of Patent: February 4, 2025
    Assignee: Google LLC
    Inventors: Jiazhen Zheng, Srinivas Vaduvatha, Hugh McEvoy Walsh, Prashant R. Chandra, Abhishek Agarwal, Weihuang Wang, Weiwei Jiang
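
The same packet-cache abstract appears in publication 20240193093 and patent 11995000 below, so a single illustrative sketch is given here. It models the cache memory allocator, the address-to-address hash table, and the eviction engine in software; the slot-based allocator, the FIFO eviction policy, the high-watermark check, and all names are assumptions rather than details from the patent.

```python
class PacketCache:
    """Illustrative packet cache: a hash table keyed by the packet's non-cache
    (e.g. DRAM) address maps to a cache address, and an eviction step spills
    packets back to non-cache memory when occupancy is high."""

    def __init__(self, num_slots, high_watermark):
        self.free_slots = list(range(num_slots))      # cache memory allocator
        self.addr_to_slot = {}                        # hash table: mem addr -> cache addr
        self.slots = {}                               # cache addr -> packet bytes
        self.backing = {}                             # stand-in for non-cache memory
        self.high_watermark = high_watermark
        self.insertion_order = []                     # eviction candidates (FIFO here)

    def insert(self, mem_addr, packet):
        if mem_addr in self.addr_to_slot:             # already cached; sketch ignores updates
            return
        if not self.free_slots:
            self._evict()
        slot = self.free_slots.pop()
        self.addr_to_slot[mem_addr] = slot
        self.slots[slot] = packet
        self.insertion_order.append(mem_addr)
        if len(self.addr_to_slot) >= self.high_watermark:
            self._evict()                             # occupancy is high: make room

    def lookup(self, mem_addr):
        slot = self.addr_to_slot.get(mem_addr)
        if slot is not None:
            return self.slots[slot]                   # cache hit
        return self.backing.get(mem_addr)             # fall back to non-cache memory

    def _evict(self):
        """Eviction engine: move the oldest cached packet to non-cache memory."""
        victim = self.insertion_order.pop(0)
        slot = self.addr_to_slot.pop(victim)
        self.backing[victim] = self.slots.pop(slot)
        self.free_slots.append(slot)
```
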
  • Publication number: 20250016100
    Abstract: A custom processor core is provided, wherein the custom processor core may be used for congestion control in reliable transport protocols. The hardware architecture of the custom processor core allows for custom instructions, special register sets, and datapath enhancements for accelerating congestion control algorithms to achieve higher performance.
    Type: Application
    Filed: July 3, 2024
    Publication date: January 9, 2025
    Inventors: Srinivas Vaduvatha, Hassan Mohamed Gamal Hassan Wassel, Ye Tang, Sarin Thomas, Rakesh Gautam, Prashant Chandra, Anupam Jain
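
The application above concerns a processor core whose custom instructions, register sets, and datapath enhancements accelerate congestion-control algorithms; it does not specify a particular algorithm. Purely for illustration, the function below shows the kind of per-connection rate update such a core might execute at high rate. The constants and the increase/decrease rule are hypothetical and are not taken from the application.

```python
def update_rate(rate_bps, rtt_us, target_rtt_us, ecn_marked,
                min_rate_bps=1_000_000, max_rate_bps=100_000_000_000):
    """Hypothetical congestion-control step: multiplicative decrease on a
    congestion signal (ECN mark or RTT above target), additive increase
    otherwise, clamped to a configured rate range."""
    if ecn_marked or rtt_us > target_rtt_us:
        rate_bps = rate_bps * 0.8                      # back off on congestion
    else:
        rate_bps = rate_bps + 10_000_000               # probe for more bandwidth
    return max(min_rate_bps, min(rate_bps, max_rate_bps))
```
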
  • Patent number: 12184417
    Abstract: The technology is directed to the use of a bitmap generated at a receiver to track the status of received packets sent by a transmitter. The technology may include a network device including an input port, output port, and circuitry. The circuitry may generate a transmitter bitmap that tracks each data packet sent to another network device. The circuitry of the network device may receive, from the other network device, a receiver bitmap that identifies each data packet that is received and not received from the network device. The circuitry may then determine which data packets to retransmit by comparing the transmitter bitmap to the receiver bitmap.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: December 31, 2024
    Assignee: Google LLC
    Inventors: Yuliang Li, Hassan Mohamed Gamal Hassan Wassel, Behnam Montazeri, Weihuang Wang, Srinivas Vaduvatha, Nandita Dukkipati, Prashant R. Chandra, Masoud Moshref Javadi
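
As a rough illustration of the transmitter-side comparison described above (and in the related publication 20240048277 below), the snippet models both bitmaps as Python integers and returns the packet indices to retransmit; the integer encoding and the function name are assumptions for the example.

```python
def packets_to_retransmit(tx_bitmap, rx_bitmap):
    """Bit i of tx_bitmap is 1 if packet i was sent; bit i of rx_bitmap is 1 if
    the receiver reports it as received. Sent-but-missing packets are resent."""
    missing = tx_bitmap & ~rx_bitmap
    return [i for i in range(missing.bit_length()) if (missing >> i) & 1]

# Example: packets 0-4 were sent, the receiver saw 0, 1 and 4 -> resend 2 and 3.
assert packets_to_retransmit(0b11111, 0b10011) == [2, 3]
```
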
  • Patent number: 12164439
    Abstract: Aspects of the disclosure are directed to a packet cache eviction engine for reliable transport protocols of a network. The packet cache eviction engine can manage on-chip cache occupancy by evicting lower priority packets to off-chip memory and ensuring that higher priority packets are kept on-chip to achieve higher performance and lower latency in processing packets in the network.
    Type: Grant
    Filed: June 2, 2023
    Date of Patent: December 10, 2024
    Assignee: Google LLC
    Inventors: Chandan Muddamsetty, Jiazhen Zheng, Weiwei Jiang, Shivang Ghetia, Abhishek Agarwal, Srinivas Vaduvatha
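
The same eviction-engine abstract appears again as publication 20240403228 below, so one sketch is given here. It illustrates priority-aware eviction with a min-heap keyed on packet priority, spilling the lowest-priority (and oldest) packets to a stand-in for off-chip memory; the heap implementation, the capacity check, and all names are illustrative only.

```python
import heapq

class PacketEvictionEngine:
    """Illustrative eviction engine: packets are cached on-chip with a priority,
    and when occupancy exceeds capacity the lowest-priority packets are spilled
    to off-chip memory so higher-priority packets stay on-chip."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []                 # (priority, seq, packet_id); lowest priority on top
        self.on_chip = {}              # packet_id -> packet
        self.off_chip = {}             # stand-in for off-chip memory
        self._seq = 0                  # tie-breaker: older packets evicted first

    def cache(self, packet_id, priority, packet):
        self.on_chip[packet_id] = packet
        heapq.heappush(self.heap, (priority, self._seq, packet_id))
        self._seq += 1
        while len(self.on_chip) > self.capacity:
            self._evict_lowest()

    def _evict_lowest(self):
        while self.heap:
            _, _, packet_id = heapq.heappop(self.heap)
            if packet_id in self.on_chip:              # skip stale heap entries
                self.off_chip[packet_id] = self.on_chip.pop(packet_id)
                return
```
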
  • Publication number: 20240403228
    Abstract: Aspects of the disclosure are directed to a packet cache eviction engine for reliable transport protocols of a network. The packet cache eviction engine can manage on-chip cache occupancy by evicting lower priority packets to off-chip memory and ensuring that higher priority packets are kept on-chip to achieve higher performance and lower latency in processing packets in the network.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 5, 2024
    Inventors: Chandan Muddamsetty, Jiazhen Zheng, Weiwei Jiang, Shivang Ghetia, Abhishek Agarwal, Srinivas Vaduvatha
  • Patent number: 12132802
    Abstract: An application specific integrated circuit (ASIC) is provided for reliable transport of packets. The network interface card may include a reliable transport accelerator (RTA). The RTA may include a cache lookup database. The RTA may be configured to determine, from a received data packet, a connection identifier and query the cache lookup database for a cache entry corresponding to a connection context having the connection identifier. In response to the query, the RTA may receive a cache hit or a cache miss.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: October 29, 2024
    Assignee: Google LLC
    Inventors: Weihuang Wang, Srinivas Vaduvatha, Xiaoming Wang, Gurushankar Rajamani, Abhishek Agarwal, Jiazhen Zheng, Prashant Chandra
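
As a minimal sketch of the lookup path described above, the class below parses a connection identifier from a received packet, queries a cache of connection contexts, and reports a hit or a miss with a fallback fetch. The 4-byte header layout, the dictionary-backed backing store, and all names are assumptions for the example.

```python
class ConnectionContextCache:
    """Illustrative cache-lookup step: connection id -> connection context,
    with a miss falling back to a backing store standing in for host memory."""

    def __init__(self, backing_store):
        self.cache = {}                      # connection id -> connection context
        self.backing_store = backing_store   # stand-in for off-chip context storage

    def handle_packet(self, packet):
        conn_id = self.extract_connection_id(packet)
        context = self.cache.get(conn_id)
        if context is not None:
            return "hit", context
        # Cache miss: fetch the context (assumed present) and install it.
        context = self.backing_store[conn_id]
        self.cache[conn_id] = context
        return "miss", context

    @staticmethod
    def extract_connection_id(packet):
        # Assume, for illustration, the first 4 bytes of the header carry the id.
        return int.from_bytes(packet[:4], "big")
```
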
  • Patent number: 12132800
Abstract: A communication technology that provides for handling of failed packet transmissions to reduce retransmission attempts and uses resynchronization to prevent tearing down of connections, thereby providing more resilient connections. In an implementation, an initiator entity may determine that a negative acknowledgment indicates that an operation for a particular packet is completed in error by a target entity, and transmit to the target entity a resynchronization packet without tearing down the connection.
    Type: Grant
    Filed: September 13, 2023
    Date of Patent: October 29, 2024
    Assignee: Google LLC
    Inventors: Weihuang Wang, Prashant Chandra, Srinivas Vaduvatha
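
A minimal sketch of the initiator-side behavior described above, assuming dictionary-shaped NACK and connection records and a caller-supplied send function; the field names and the retransmit-queue fallback are illustrative, not from the patent.

```python
def handle_negative_ack(nack, connection, send):
    """If the NACK reports that the target completed the operation in error,
    send a resynchronization packet on the same connection instead of tearing
    it down; otherwise schedule a normal retransmission."""
    if nack.get("completed_in_error"):
        send(connection, {"type": "RESYNC", "psn": nack["psn"]})
        connection["awaiting_resync_ack"] = True      # connection stays up
    else:
        connection["retransmit_queue"].append(nack["psn"])
```
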
  • Patent number: 12040988
Abstract: A communication protocol system is provided for reliable transport of packets. A content addressable memory hardware architecture includes an acknowledgment coalescing module in communication with a content addressable memory (CAM). The acknowledgment coalescing module coalesces multiple acknowledgement packets into a single acknowledgement packet to reduce the overall number of packet transmissions in the communication protocol system. In addition, the acknowledgment coalescing module may also provide a piggyback mechanism to carry acknowledgment information in a regular data packet, eliminating the need to generate a new acknowledgement packet. Accordingly, network congestion and latency may be reduced, and communication and transmission efficiency enhanced.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: July 16, 2024
    Assignee: Google LLC
    Inventors: Srinivas Vaduvatha, Weihuang Wang, Jiazhen Zheng, Prashant Chandra
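
The sketch below illustrates the two mechanisms named in the abstract: coalescing several acknowledgments into one ACK packet, and piggybacking pending acknowledgment state on an outgoing data packet. The flush threshold, the dictionary packet format, and the class name are assumptions, and the patented content-addressable-memory hardware architecture is not modeled here.

```python
class AckCoalescer:
    """Illustrative ACK coalescing: acknowledgments accumulate per connection
    and are either flushed as one cumulative ACK or piggybacked on data."""

    def __init__(self, flush_threshold=8):
        self.pending = {}                    # conn_id -> highest sequence to acknowledge
        self.count = {}                      # conn_id -> packets since last ACK sent
        self.flush_threshold = flush_threshold

    def on_receive(self, conn_id, seq):
        """Record a received packet; emit a coalesced ACK once the threshold is hit."""
        self.pending[conn_id] = max(self.pending.get(conn_id, seq), seq)
        self.count[conn_id] = self.count.get(conn_id, 0) + 1
        if self.count[conn_id] >= self.flush_threshold:
            return self.flush(conn_id)
        return None

    def piggyback(self, conn_id, data_packet):
        """Attach any pending acknowledgment to an outgoing data packet."""
        if conn_id in self.pending:
            data_packet["ack"] = self.pending.pop(conn_id)
            self.count.pop(conn_id, None)
        return data_packet

    def flush(self, conn_id):
        """Emit one ACK packet covering everything pending for this connection."""
        ack = {"type": "ACK", "conn": conn_id, "ack_seq": self.pending.pop(conn_id)}
        self.count.pop(conn_id, None)
        return ack
```
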
  • Patent number: 12019542
    Abstract: Aspects of the disclosure are directed to high performance connection cache eviction for reliable transport protocols in data center networking. Connection priorities for connection entries are determined to store the connection entries in a cache based on their connection priority. During cache eviction, the connection entries with a lowest connection priority are evicted from the cache. Cache eviction can be achieved with low latency at a high rate.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: June 25, 2024
    Assignee: Google LLC
    Inventors: Abhishek Agarwal, Jiazhen Zheng, Srinivas Vaduvatha, Weihuang Wang, Hugh McEvoy Walsh, Weiwei Jiang, Ajay Venkatesan, Prashant R. Chandra
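
This connection-cache abstract also appears as publication 20240045800 below, so one sketch is given here. It groups connection entries into per-priority buckets and evicts from the lowest non-empty priority, oldest entry first; the bucket layout, the fixed priority levels, and all names are illustrative.

```python
from collections import OrderedDict

class ConnectionCache:
    """Illustrative priority-based connection cache: entries are stored by
    connection priority, and eviction removes an entry from the lowest
    non-empty priority group (oldest first within a group)."""

    def __init__(self, capacity, priorities=(0, 1, 2, 3)):
        self.capacity = capacity
        # priority -> OrderedDict of conn_id -> connection entry (insertion ordered)
        self.buckets = {p: OrderedDict() for p in sorted(priorities)}
        self.size = 0

    def insert(self, conn_id, priority, entry):
        evicted = None
        if self.size >= self.capacity:
            evicted = self._evict_lowest_priority()
        self.buckets[priority][conn_id] = entry
        self.size += 1
        return evicted                          # caller writes this back to memory

    def _evict_lowest_priority(self):
        for priority in sorted(self.buckets):   # lowest connection priority first
            bucket = self.buckets[priority]
            if bucket:
                conn_id, entry = bucket.popitem(last=False)   # oldest entry
                self.size -= 1
                return conn_id, entry
        return None
```
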
  • Publication number: 20240193093
    Abstract: A packet cache system includes a cache memory allocator for receiving a memory address corresponding to a non-cache memory and allocated to a packet, and associating the memory address with a cache memory address; a hash table for storing the memory address and the cache memory address, with the memory address as a key and the cache memory address as a value; a cache memory for storing the packet at a location indicated by the cache memory address; and an eviction engine for determining one or more cached packets to remove from the cache memory and place in the non-cache memory when occupancy of the cache memory is high.
    Type: Application
    Filed: February 21, 2024
    Publication date: June 13, 2024
    Inventors: Jiazhen Zheng, Srinivas Vaduvatha, Hugh McEvoy Walsh, Prashant R. Chandra, Abhishek Agarwal, Weihuang Wang, Weiwei Jiang
  • Patent number: 11995000
    Abstract: A packet cache system includes a cache memory allocator for receiving a memory address corresponding to a non-cache memory and allocated to a packet, and associating the memory address with a cache memory address; a hash table for storing the memory address and the cache memory address, with the memory address as a key and the cache memory address as a value; a cache memory for storing the packet at a location indicated by the cache memory address; and an eviction engine for determining one or more cached packets to remove from the cache memory and place in the non-cache memory when occupancy of the cache memory is high.
    Type: Grant
    Filed: June 7, 2022
    Date of Patent: May 28, 2024
    Assignee: Google LLC
    Inventors: Jiazhen Zheng, Srinivas Vaduvatha, Hugh McEvoy Walsh, Prashant R. Chandra, Abhishek Agarwal, Weihuang Wang, Weiwei Jiang
  • Publication number: 20240168996
    Abstract: A hash table system, including a plurality of hash tables, associated with respective hash functions, for storing key-value pairs; an overflow memory for storing key-value pairs moved from the hash tables due to collision; and an arbiter for arbitrating among commands including update commands, match commands, and rehash commands, wherein for each system clock cycle, the arbiter selects as a selected command one of an update command, a match command, or a rehash command, and wherein the hash table system completes execution of each selected command within a bounded number of system clock cycles.
    Type: Application
    Filed: January 26, 2024
    Publication date: May 23, 2024
    Inventors: Weiwei Jiang, Srinivas Vaduvatha, Prashant R. Chandra, Jiazhen Zheng, Hugh McEvoy Walsh, Weihuang Wang, Abhishek Agarwal
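
The same hash-table abstract is granted as patent 11914647 below, so one sketch is given here. It shows two fixed-size tables with distinct hash functions plus an overflow store, which is enough to make the bounded-cost match behavior concrete; the arbiter and rehash machinery are omitted, and the two hash functions, the table size, and all names are assumptions.

```python
class MultiHashTable:
    """Illustrative multi-table hash store: each table has its own hash
    function, and keys that collide in every table go to overflow memory, so a
    lookup probes a bounded number of locations."""

    def __init__(self, table_size=1024):
        self.table_size = table_size
        self.hash_fns = [lambda k: hash(("a", k)), lambda k: hash(("b", k))]
        self.tables = [dict() for _ in self.hash_fns]   # slot index -> (key, value)
        self.overflow = {}                              # key -> value

    def update(self, key, value):
        """Update command: place the key in the first table whose slot is free
        (or already holds this key); otherwise fall back to overflow memory."""
        for table, h in zip(self.tables, self.hash_fns):
            slot = h(key) % self.table_size
            if slot not in table or table[slot][0] == key:
                table[slot] = (key, value)
                return
        self.overflow[key] = value

    def match(self, key):
        """Match command: probe every table, then the overflow memory."""
        for table, h in zip(self.tables, self.hash_fns):
            entry = table.get(h(key) % self.table_size)
            if entry is not None and entry[0] == key:
                return entry[1]
        return self.overflow.get(key)
```
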
  • Patent number: 11979476
    Abstract: Aspects of the disclosure are directed to a high performance connection scheduler for reliable transport protocols in data center networking. The connection scheduler can handle enqueue events, dequeue events, and update events. The connection scheduler can include a connection queue, scheduling queue, and quality of service arbiter to support scheduling a large number of connections at a high rate.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: May 7, 2024
    Assignee: Google LLC
    Inventors: Abhishek Agarwal, Weihuang Wang, Weiwei Jiang, Srinivas Vaduvatha, Jiazhen Zheng
  • Publication number: 20240121320
    Abstract: Aspects of the disclosure are directed to a high performance connection scheduler for reliable transport protocols in data center networking. The connection scheduler can handle enqueue events, dequeue events, and update events. The connection scheduler can include a connection queue, scheduling queue, and quality of service arbiter to support scheduling a large number of connections at a high rate.
    Type: Application
    Filed: October 7, 2022
    Publication date: April 11, 2024
    Inventors: Abhishek Agarwal, Weihuang Wang, Weiwei Jiang, Srinivas Vaduvatha, Jiazhen Zheng
  • Publication number: 20240111667
    Abstract: Aspects of the disclosure are directed to a memory allocator for assigning contiguous memory space for data packets in on-chip memory of a network interface card. The memory allocator includes a plurality of sub-allocators that correspond to a structure of entries, where each entry represents a quanta of memory allocation. The sub-allocators are organized in decreasing size in the memory allocator based on the amount of memory quanta they can allocate.
    Type: Application
    Filed: September 28, 2022
    Publication date: April 4, 2024
    Inventors: Abhishek Agarwal, Srinivas Vaduvatha, Weiwei Jiang, Hugh McEvoy Walsh, Weihuang Wang, Jiazhen Zheng, Ajay Venkatesan
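
As a rough software model of the allocator described above, the sketch below builds sub-allocators ordered from the largest entry size (in quanta) to the smallest and serves each packet from the smallest entry that covers it, so every allocation is contiguous. The 256-byte quantum, the layout table, and all names are assumptions for the example.

```python
class QuantaSubAllocator:
    """One sub-allocator: hands out contiguous runs of a fixed number of quanta.
    A free list of starting quanta indices is enough for this sketch."""

    def __init__(self, quanta_per_entry, num_entries, base):
        self.quanta_per_entry = quanta_per_entry
        self.free = [base + i * quanta_per_entry for i in range(num_entries)]

    def alloc(self):
        return self.free.pop() if self.free else None

    def release(self, start):
        self.free.append(start)


class PacketMemoryAllocator:
    """Illustrative allocator: sub-allocators organized in decreasing entry
    size; a request is routed to the smallest entry size that covers it."""

    QUANTUM_BYTES = 256                                 # assumed allocation quantum

    def __init__(self):
        # (quanta per entry, number of entries), decreasing entry size.
        layout = [(16, 64), (4, 256), (1, 1024)]
        self.sub_allocators, base = [], 0
        for quanta, count in layout:
            self.sub_allocators.append(QuantaSubAllocator(quanta, count, base))
            base += quanta * count

    def alloc(self, packet_bytes):
        quanta_needed = -(-packet_bytes // self.QUANTUM_BYTES)   # ceiling division
        # Walk from the smallest entries upward to find the tightest fit.
        for sub in reversed(self.sub_allocators):
            if sub.quanta_per_entry >= quanta_needed:
                start = sub.alloc()
                if start is not None:
                    return start, sub.quanta_per_entry * self.QUANTUM_BYTES
        return None                                              # no space available
```
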
  • Patent number: 11914647
    Abstract: A hash table system, including a plurality of hash tables, associated with respective hash functions, for storing key-value pairs; an overflow memory for storing key-value pairs moved from the hash tables due to collision; and an arbiter for arbitrating among commands including update commands, match commands, and rehash commands, wherein for each system clock cycle, the arbiter selects as a selected command one of an update command, a match command, or a rehash command, and wherein the hash table system completes execution of each selected command within a bounded number of system clock cycles.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: February 27, 2024
    Assignee: Google LLC
    Inventors: Weiwei Jiang, Srinivas Vaduvatha, Prashant R. Chandra, Jiazhen Zheng, Hugh McEvoy Walsh, Weihuang Wang, Abhishek Agarwal
  • Publication number: 20240064215
    Abstract: Compressing connection state information for a network connection including receiving an input bitmap having a sequence of bits describing transmit states and receive states; partitioning the input bitmap into a plurality of equal size blocks; partitioning each of the blocks into a plurality of equal sized sectors; generating a block valid sequence indicating the blocks having at least one bit set; generating, for each block having at least one bit set, a sector information sequence, the sector information sequence indicating, for the corresponding block, the sectors that have at least one bit set and an encoding type for each sector; and generating one or more symbols by encoding each sector that has at least one bit set.
    Type: Application
    Filed: May 22, 2023
    Publication date: February 22, 2024
    Inventors: Srinivas Vaduvatha, Weiwei Jiang, Prashant Chandra, Opeoluwa Oladipo, Jiazhen Zheng, Hugh McEvoy Walsh, Weihuang Wang, Abhishek Agarwal
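
The sketch below walks through the steps listed in the abstract: cut the bitmap into equal blocks, cut each block into equal sectors, record which blocks and sectors are non-empty, and emit a symbol per non-empty sector. A single 'raw' encoding type stands in for the per-sector encoding choice, and the block and sector sizes are assumptions.

```python
def compress_bitmap(bits, block_size=64, sector_size=16):
    """Illustrative hierarchical compression of a transmit/receive state bitmap:
    empty blocks and sectors are elided; non-empty sectors become symbols."""
    assert len(bits) % block_size == 0 and block_size % sector_size == 0
    block_valid, sector_info, symbols = [], [], []
    for b in range(0, len(bits), block_size):
        block = bits[b:b + block_size]
        block_valid.append(int(any(block)))            # block valid sequence
        if not any(block):
            continue
        sectors = [block[s:s + sector_size] for s in range(0, block_size, sector_size)]
        # One (sector_has_bits, encoding_type) pair per sector of this block.
        sector_info.append([(int(any(sec)), "raw") for sec in sectors])
        symbols.extend(tuple(sec) for sec in sectors if any(sec))
    return block_valid, sector_info, symbols

# Example: 128 state bits with only bits 3 and 70 set compress to two valid
# blocks and two sector symbols.
bits = [0] * 128
bits[3] = bits[70] = 1
valid, info, syms = compress_bitmap(bits)
assert valid == [1, 1] and len(syms) == 2
```
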
  • Publication number: 20240045800
    Abstract: Aspects of the disclosure are directed to high performance connection cache eviction for reliable transport protocols in data center networking. Connection priorities for connection entries are determined to store the connection entries in a cache based on their connection priority. During cache eviction, the connection entries with a lowest connection priority are evicted from the cache. Cache eviction can be achieved with low latency at a high rate.
    Type: Application
    Filed: August 8, 2022
    Publication date: February 8, 2024
    Inventors: Abhishek Agarwal, Jiazhen Zheng, Srinivas Vaduvatha, Weihuang Wang, Hugh McEvoy Walsh, Weiwei Jiang, Ajay Venkatesan, Prashant R. Chandra
  • Publication number: 20240048277
    Abstract: The technology is directed to the use of a bitmap generated at a receiver to track the status of received packets sent by a transmitter. The technology may include a network device including an input port, output port, and circuitry. The circuitry may generate a transmitter bitmap that tracks each data packet sent to another network device. The circuitry of the network device may receive, from the other network device, a receiver bitmap that identifies each data packet that is received and not received from the network device. The circuitry may then determine which data packets to retransmit by comparing the transmitter bitmap to the receiver bitmap.
    Type: Application
    Filed: August 3, 2022
    Publication date: February 8, 2024
    Inventors: Yuliang Li, Hassan Mohamed Gamal Hassan Wassel, Behnam Montazeri, Weihuang Wang, Srinivas Vaduvatha, Nandita Dukkipati, Prashant R. Chandra, Masoud Moshref Javadi