Patents by Inventor Natarajan Vaidhyanathan

Natarajan Vaidhyanathan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9813420
    Abstract: Access control lists (ACLs) permit network administrators to manage network traffic flowing through a networking element to optimize network security, performance, quality of service (QoS), and the like. If a networking element has multiple ACLs directed towards different types of network optimization, each ACL may return a separate action set that identifies one or more actions the networking element should perform based on a received frame. In some cases, these action sets may conflict. To resolve the conflicts, a networking element may include resolution logic that selects one of the conflicting actions based on a predefined precedence value assigned to each action in an action set. By comparing the different precedence values, the resolution logic generates a new action set based on the actions with the highest precedence value.
    Type: Grant
    Filed: February 18, 2013
    Date of Patent: November 7, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Natarajan Vaidhyanathan, Colin B. Verrilli
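    Illustrative sketch: a minimal Python model of the precedence-based resolution described in the abstract above; the action names, precedence table, and resolve() helper are assumptions made for illustration, not details taken from the patent.

      # Hypothetical sketch: merge conflicting ACL action sets by precedence.
      def resolve(action_sets, precedence):
          """action_sets: dicts mapping action kind -> action value.
          precedence: dict mapping (kind, value) -> numeric precedence."""
          resolved, best = {}, {}
          for actions in action_sets:
              for kind, value in actions.items():
                  p = precedence.get((kind, value), 0)
                  if kind not in resolved or p > best[kind]:
                      resolved[kind], best[kind] = value, p
          return resolved

      # Example: a security ACL wants to drop the frame, a QoS ACL wants to remark it.
      sets = [{"forward": "drop"}, {"forward": "permit", "qos": "remark_dscp_46"}]
      prec = {("forward", "drop"): 10, ("forward", "permit"): 5, ("qos", "remark_dscp_46"): 3}
      print(resolve(sets, prec))   # {'forward': 'drop', 'qos': 'remark_dscp_46'}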
  • Patent number: 9794262
    Abstract: Embodiments presented herein describe techniques for selecting incoming network frames to be mirrored using an access control list. According to one embodiment, an incoming frame is received. Upon determining that the incoming frame matches an entry in the access control list, a mirror field of the entry is evaluated. The mirror field identifies at least one mirroring action to perform on the frame. The identified mirroring action is performed on the frame.
    Type: Grant
    Filed: October 22, 2014
    Date of Patent: October 17, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Todd A. Greenfield, Joseph A. Kirscht, Natarajan Vaidhyanathan
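    Illustrative sketch: a minimal Python model of the mirror-field evaluation described in the abstract above; the entry layout, the mirror targets, and the "first matching entry wins" behavior are assumptions made for illustration.

      # Hypothetical sketch: mirror a frame when its matching ACL entry asks for it.
      def process_frame(frame, acl, mirror_targets):
          for entry in acl:
              if entry["match"](frame):
                  for target in entry.get("mirror", []):   # mirror field names the mirroring action(s)
                      mirror_targets[target](frame)        # copy the frame to the mirror destination
                  break                                    # assume the first matching entry wins

      acl = [{"match": lambda f: f["dst"] == "01:80:c2:00:00:0e", "mirror": ["to_cpu"]}]
      mirror_targets = {"to_cpu": lambda f: print("mirrored to CPU:", f)}
      process_frame({"dst": "01:80:c2:00:00:0e", "payload": b""}, acl, mirror_targets)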
  • Publication number: 20170286001
    Abstract: Providing memory bandwidth compression using compression indicator (CI) hint directories in a central processing unit (CPU)-based system is disclosed. In this regard, a compressed memory controller provides a CI hint directory comprising a plurality of CI hint directory entries, each providing a plurality of CI hints. The compressed memory controller is configured to receive a memory read request comprising a physical address of a memory line, and initiate a memory read transaction comprising a requested read length value. The compressed memory controller is further configured to, in parallel with initiating the memory read transaction, determine whether the physical address corresponds to a CI hint directory entry in the CI hint directory. If so, the compressed memory controller reads a CI hint from the CI hint directory entry of the CI hint directory, and modifies the requested read length value of the memory read transaction based on the CI hint.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan
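    Illustrative sketch: a rough Python model of how a CI hint could shorten a read that has already been issued at full length; the directory layout, line size, and 128-byte default are assumptions, not details from the application.

      # Hypothetical sketch: use a CI hint directory to shrink a memory read.
      FULL_LINE = 128                                          # bytes requested when no hint is found (assumed)

      def issue_read(phys_addr, ci_hint_dir):
          read_len = FULL_LINE                                 # memory read transaction starts speculatively
          entry = ci_hint_dir.get(phys_addr >> 6)              # directory lookup happens "in parallel"
          if entry is not None:
              read_len = entry.get(phys_addr & 0x3F, read_len) # CI hint: bytes the compressed line needs
          return phys_addr, read_len

      ci_dir = {0x1000 >> 6: {0x1000 & 0x3F: 32}}
      print(issue_read(0x1000, ci_dir))                        # (4096, 32)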
  • Publication number: 20170286308
    Abstract: Providing memory bandwidth compression using multiple last-level cache (LLC) lines in a central processing unit (CPU)-based system is disclosed. In some aspects, a compressed memory controller (CMC) provides an LLC comprising multiple LLC lines, each providing a plurality of sub-lines the same size as a system cache line. The contents of the system cache line(s) stored within a single LLC line are compressed and stored in system memory within the memory sub-line region corresponding to the LLC line. A master table stores information indicating how the compressed data for an LLC line is stored in system memory by storing an offset value and a length value for each sub-line within each LLC line. By compressing multiple system cache lines together and storing compressed data in a space normally allocated to multiple uncompressed system lines, the CMC enables compression sizes to be smaller than the memory read/write granularity of the system memory.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Mark Anthony Rinaldi, Natarajan Vaidhyanathan
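    Illustrative sketch: a rough Python model of the master-table bookkeeping described above, where one LLC line covers several system-cache-line-sized sub-lines; the sizes and the packing helper are assumptions made for illustration.

      # Hypothetical sketch: pack compressed sub-lines into one LLC line's memory
      # region and record (offset, length) per sub-line in a master-table entry.
      SUBLINES_PER_LLC_LINE = 4
      SUBLINE_SIZE = 64                                 # uncompressed system cache line size (assumed)

      def pack_llc_line(compressed_sublines):
          assert len(compressed_sublines) == SUBLINES_PER_LLC_LINE
          region, table_entry, offset = bytearray(), [], 0
          for blob in compressed_sublines:
              assert len(blob) <= SUBLINE_SIZE          # compression never expands past a sub-line here
              table_entry.append((offset, len(blob)))   # master-table record for this sub-line
              region += blob
              offset += len(blob)
          return bytes(region), table_entry

      region, entry = pack_llc_line([b"a" * 20, b"b" * 30, b"c" * 64, b"d" * 10])
      print(entry)                                      # [(0, 20), (20, 30), (50, 64), (114, 10)]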
  • Publication number: 20170286214
    Abstract: Providing space-efficient storage for dynamic random access memory (DRAM) cache tags is provided. In one aspect, a DRAM cache management circuit provides a plurality of cache entries, each of which contains a tag storage region, a data storage region, and an error protection region. The DRAM cache management circuit is configured to store data to be cached in the data storage region of each cache entry. The DRAM cache management circuit is also configured to use an error detection code (EDC) instead of an error correcting code (ECC), and to store a tag and the EDC for each cache entry in the error protection region of the cache entry. In this manner, the capacity of a DRAM cache can be increased by avoiding the need for the tag storage region for each cache entry, while still providing error detection for the cache entry.
    Type: Application
    Filed: March 30, 2016
    Publication date: October 5, 2017
    Inventors: Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
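    Illustrative sketch: a small Python model of keeping the tag plus an error-detection code in the per-entry error protection region; the CRC choice and entry layout are assumptions, not details from the application.

      # Hypothetical sketch: store (tag, EDC) where ECC bits would normally live,
      # trading correction for detection to avoid a separate tag store.
      import zlib

      def write_entry(cache, index, tag, data):
          edc = zlib.crc32(data) & 0xFFFF               # small detection-only code (assumed CRC)
          cache[index] = {"protect": (tag, edc), "data": data}

      def read_entry(cache, index, tag):
          entry = cache.get(index)
          if entry is None or entry["protect"][0] != tag:
              return None                               # miss: nothing, or a different line, cached here
          if zlib.crc32(entry["data"]) & 0xFFFF != entry["protect"][1]:
              raise IOError("error detected; refetch the line from system memory")
          return entry["data"]

      cache = {}
      write_entry(cache, 7, tag=0x3A, data=b"\x11" * 64)
      print(read_entry(cache, 7, tag=0x3A) is not None)   # True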
  • Patent number: 9749328
    Abstract: Embodiments presented herein describe techniques for selecting incoming network frames to be mirrored using an access control list. According to one embodiment, an incoming frame is received. Upon determining that the incoming frame matches an entry in the access control list, a mirror field of the entry is evaluated. The mirror field identifies at least one mirroring action to perform on the frame. The identified mirroring action is performed on the frame.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: August 29, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Todd A. Greenfield, Joseph A. Kirscht, Natarajan Vaidhyanathan
  • Publication number: 20170242793
    Abstract: Providing scalable dynamic random access memory (DRAM) cache management using DRAM cache indicator caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in high-bandwidth memory. The DRAM cache management circuit comprises a DRAM cache indicator cache, which stores master table entries that are read from a master table in a system memory DRAM and that contain DRAM cache indicators. The DRAM cache indicators enable the DRAM cache management circuit to determine whether a memory line in the system memory DRAM is cached in the DRAM cache of high-bandwidth memory, and, if so, in which way of the DRAM cache the memory line is stored.
    Type: Application
    Filed: August 4, 2016
    Publication date: August 24, 2017
    Inventors: Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
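    Illustrative sketch: a rough Python model of consulting a DRAM cache indicator cache before the master table in system-memory DRAM; the table shapes and indexing are assumptions made for illustration.

      # Hypothetical sketch: the DCIC caches master-table entries whose DRAM cache
      # indicators say whether (and in which way) a memory line is cached.
      def lookup(line_addr, dcic, master_table):
          set_index = line_addr % len(master_table)
          entry = dcic.get(set_index)
          if entry is None:                      # DCIC miss: fetch the entry from system-memory DRAM
              entry = master_table[set_index]
              dcic[set_index] = entry            # fill the DRAM cache indicator cache
          return entry.get(line_addr, (False, None))   # (cached in the HBM DRAM cache?, which way)

      master = [dict() for _ in range(1024)]
      master[0x2A % 1024][0x2A] = (True, 3)
      print(lookup(0x2A, dcic={}, master_table=master))   # (True, 3)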
  • Patent number: 9740621
    Abstract: Memory controllers employing memory capacity and/or bandwidth compression with next read address prefetching, and related processor-based systems and methods are disclosed. In certain aspects, memory controllers are employed that can provide memory capacity compression. In certain aspects disclosed herein, a next read address prefetching scheme can be used by a memory controller to speculatively prefetch data from system memory at another address beyond the currently accessed address. Thus, when memory data is addressed in the compressed memory, if the next read address is stored in metadata associated with the memory block at the accessed address, the memory data at the next read address can be prefetched by the memory controller so that it is already available in case a subsequent read operation issued by a central processing unit (CPU) requests the memory data at that next read address.
    Type: Grant
    Filed: May 19, 2015
    Date of Patent: August 22, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan, Colin Beaton Verrilli
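    Illustrative sketch: a minimal Python model of the next-read-address prefetch described in the abstract above; the metadata and buffer structures are assumptions made for illustration.

      # Hypothetical sketch: when a block is read, metadata may record the address
      # that followed it in the past; prefetch that block speculatively.
      def read_block(addr, memory, metadata, prefetch_buffer):
          data = memory[addr]
          next_addr = metadata.get(addr)                       # stored "next read address", if any
          if next_addr is not None and next_addr not in prefetch_buffer:
              prefetch_buffer[next_addr] = memory[next_addr]   # speculative prefetch
          return data

      memory = {0x100: b"A" * 64, 0x940: b"B" * 64}
      metadata, buf = {0x100: 0x940}, {}
      read_block(0x100, memory, metadata, buf)
      print(0x940 in buf)   # True: the likely next request is already on hand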
  • Patent number: 9722931
    Abstract: Embodiments presented herein describe techniques for isolating multicast and broadcast frames to a traffic class that is separate from a traffic class used for unicast frames. According to one embodiment, a network switch receives an incoming Ethernet virtual local area network (VLAN)-tagged frame. The switch evaluates priority bits of the VLAN tag of the frame. The switch also determines a type of frame (e.g., whether the frame is unicast, broadcast, multicast, or flood). Based on the priority field values and the type of the frame, the switch identifies a mapping of the frame to a particular traffic class. The network switch assigns the frame to the traffic class.
    Type: Grant
    Filed: June 5, 2014
    Date of Patent: August 1, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Joseph A. Kirscht, Michel Poret, Ethan M. Spiegel, Natarajan Vaidhyanathan
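    Illustrative sketch: a small Python model of mapping VLAN priority bits plus frame type to a traffic class; the specific mapping (classes 0-7 for unicast, 8 and 9 otherwise) is an assumption made for illustration.

      # Hypothetical sketch: keep multicast/broadcast/flood traffic in classes
      # separate from unicast by keying the map on (priority, unicast or not).
      def classify(pcp, frame_type, tc_map):
          """pcp: 3-bit VLAN priority; frame_type: unicast, multicast, broadcast, or flood."""
          kind = "unicast" if frame_type == "unicast" else "non_unicast"
          return tc_map[(pcp, kind)]

      tc_map = {(p, "unicast"): p for p in range(8)}
      tc_map.update({(p, "non_unicast"): 8 if p < 4 else 9 for p in range(8)})
      print(classify(5, "unicast", tc_map))     # 5
      print(classify(5, "broadcast", tc_map))   # 9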
  • Publication number: 20170212840
    Abstract: Providing scalable dynamic random access memory (DRAM) cache management using tag directory caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in a high-bandwidth memory. The DRAM cache management circuit comprises a tag directory cache and a tag directory cache directory. The tag directory cache stores tags of frequently accessed cache lines in the DRAM cache, while the tag directory cache directory stores tags for the tag directory cache. The DRAM cache management circuit uses the tag directory cache and the tag directory cache directory to determine whether data associated with a memory address is cached in the DRAM cache of the high-bandwidth memory. Based on the tag directory cache and the tag directory cache directory, the DRAM cache management circuit may determine whether a memory operation can be performed using the DRAM cache and/or a system memory DRAM.
    Type: Application
    Filed: June 24, 2016
    Publication date: July 27, 2017
    Inventors: Hien Minh Le, Thuong Quang Truong, Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
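    Illustrative sketch: a loose Python model of the two-level lookup described above; the data structures are assumptions made for illustration and collapse many details of the real design.

      # Hypothetical sketch: the tag directory cache (TDC) holds a subset of the
      # DRAM cache's tag entries; the TDC directory records which ones it holds.
      def is_cached_in_dram_cache(line_addr, tdc_directory, tdc, full_tag_store):
          if line_addr in tdc_directory:                # the TDC has this line's tag entry on chip
              return tdc[line_addr]
          return full_tag_store.get(line_addr, False)   # otherwise consult the full tag store in HBM

      tdc_directory = {0x503A}
      tdc = {0x503A: True}
      print(is_cached_in_dram_cache(0x503A, tdc_directory, tdc, full_tag_store={}))   # True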
  • Patent number: 9571502
    Abstract: Access control lists (ACLs) permit network administrators to manage network traffic flowing through a networking element to optimize network security, performance, quality of service (QoS), and the like. If a networking element has multiple ACLs directed towards different types of network optimization, each ACL may return a separate action set that identifies one or more actions the networking element should perform based on a received frame. In some cases, these action sets may conflict. To resolve the conflicts, a networking element may include resolution logic that selects one of the conflicting actions based on a predefined precedence value assigned to each action in an action set. By comparing the different precedence values, the resolution logic generates a new action set based on the actions with the highest precedence value.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: February 14, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Natarajan Vaidhyanathan, Colin B. Verrilli
  • Patent number: 9531847
    Abstract: Embodiments presented herein describe techniques for parsing an Internet Protocol version 6 frame and skipping extension headers of the frame. A configurable skip list is provided that specifies extension headers for a networking device to skip when parsing the frame. The networking device examines the “next header” field of each extension header to determine the next extension header in the chain. If the next extension header matches an extension header in the skip list, the networking device iterates to the next header in the chain until the end of the chain (or an extension header that does not contain a match in the list) is reached.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: December 27, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Todd A. Greenfield, Michel Poret, Natarajan Vaidhyanathan
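    Illustrative sketch: a minimal Python model of walking the extension header chain with a configurable skip list; the header chain representation is an assumption made for illustration.

      # Hypothetical sketch: follow each header's "next header" value, skipping
      # types on the skip list, until a non-skipped header type is reached.
      HOP_BY_HOP, TCP = 0, 6
      ROUTING, FRAGMENT = 43, 44

      def skip_extension_headers(first_type, next_header_of, skip_list):
          current = first_type
          while current in skip_list:
              current = next_header_of[current]   # read this header's "next header" field
          return current                          # first header the parser actually processes

      chain = {HOP_BY_HOP: ROUTING, ROUTING: FRAGMENT, FRAGMENT: TCP}
      print(skip_extension_headers(HOP_BY_HOP, chain, {HOP_BY_HOP, ROUTING, FRAGMENT}))   # 6 (TCP)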
  • Patent number: 9516146
    Abstract: Embodiments presented herein describe techniques for parsing an Internet Protocol version 6 frame and skipping extension headers of the frame. A configurable skip list is provided that specifies extension headers for a networking device to skip when parsing the frame. The networking device examines the “next header” field of each extension header to determine the next extension header in the chain. If the next extension header matches an extension header in the skip list, the networking device iterates to the next header in the chain until the end of the chain (or an extension header that does not contain a match in the list) is reached.
    Type: Grant
    Filed: October 22, 2014
    Date of Patent: December 6, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Todd A. Greenfield, Michel Poret, Natarajan Vaidhyanathan
  • Patent number: 9497119
    Abstract: Embodiments presented herein provide a TCAM-based access control list that supports disjunction operations in rules. According to one embodiment, a numeric range table is tied to the access control list. Each entry in the numeric range table includes an encode field that provides for scanning TCP flags in a TCP header of an incoming Ethernet frame. Further, each entry provides a first mask and a second mask used to test for desired set and unset TCP flags in a given frame. Each entry also provides an operation field that performs a disjunction operation that compares the first mask, the second mask, and set TCP flags in a given frame.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: November 15, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Joseph A. Kirscht, Natarajan Vaidhyanathan
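    Illustrative sketch: one plausible Python reading of a numeric-range-table entry that tests TCP flags with two masks combined by a disjunction; the exact mask semantics shown here are an assumption, not details taken from the patent.

      # Hypothetical sketch: match if any flag named by mask_set is set OR any
      # flag named by mask_unset is clear in the frame's TCP flags.
      FIN, SYN, RST, ACK = 0x01, 0x02, 0x04, 0x10

      def range_table_match(tcp_flags, mask_set, mask_unset):
          return (tcp_flags & mask_set) != 0 or (~tcp_flags & mask_unset) != 0

      # Flag frames that are connection setup or teardown (SYN, FIN, or RST set).
      print(range_table_match(SYN | ACK, mask_set=SYN | FIN | RST, mask_unset=0))   # True
      print(range_table_match(ACK, mask_set=SYN | FIN | RST, mask_unset=0))         # False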
  • Patent number: 9485257
    Abstract: Embodiments described herein provide techniques for atomically updating a ternary content addressable memory (TCAM)-based access control list (ACL). According to one embodiment, a current version bit of the ACL is determined. The current version bit indicates that a rule in the ACL is active if the version flag in the rule matches the current version bit. Through these techniques, a first set of rules can be modified to create a second set of rules (e.g., by insertions, deletions, and replacements, etc.).
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: November 1, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Joseph A. Kirscht, Natarajan Vaidhyanathan, Colin B. Verrilli
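    Illustrative sketch: a small Python model of publishing a staged rule set by flipping a single version bit; the class layout and the garbage collection step are assumptions made for illustration.

      # Hypothetical sketch: a rule is active only if its version flag equals the
      # ACL's current version bit, so flipping that bit switches rule sets atomically.
      class Acl:
          def __init__(self):
              self.current_version = 0
              self.rules = []                               # (version_flag, match_fn, action)

          def active_rules(self):
              return [(m, a) for v, m, a in self.rules if v == self.current_version]

          def stage_rule(self, match_fn, action):
              self.rules.append((self.current_version ^ 1, match_fn, action))   # staged, not yet visible

          def commit(self):
              self.current_version ^= 1                     # the staged set becomes active in one step
              self.rules = [r for r in self.rules if r[0] == self.current_version]   # drop the old set

      acl = Acl()
      acl.stage_rule(lambda f: f["proto"] == "tcp", "permit")
      print(len(acl.active_rules()))   # 0: staged rules are invisible before the flip
      acl.commit()
      print(len(acl.active_rules()))   # 1: the new rule set took effect atomically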
  • Patent number: 9473502
    Abstract: Embodiments described herein provide techniques for atomically updating a ternary content addressable memory (TCAM)-based access control list (ACL). According to one embodiment, a current version bit of the ACL is determined. The current version bit indicates that a rule in the ACL is active if the version flag in the rule matches the current version bit. Through these techniques, a first set of rules can be modified to create a second set of rules (e.g., by insertions, deletions, and replacements, etc.).
    Type: Grant
    Filed: October 21, 2014
    Date of Patent: October 18, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Joseph A. Kirscht, Natarajan Vaidhyanathan, Colin B. Verrilli
  • Patent number: 9438447
    Abstract: Link aggregation is a practice that uses multiple Ethernet links between two end points in order to obtain higher bandwidth and resiliency than possible with a single link. A flow distribution technique is provided to distribute traffic between the two end points equally across all links in the group and achieve greater efficiency. The flow distribution technique generates and sub-divides a hash value based on received packet flow. The divided portions of the hash value are used in a hierarchical fashion to select a link to use for this packet.
    Type: Grant
    Filed: December 18, 2012
    Date of Patent: September 6, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, Natarajan Vaidhyanathan, Colin B. Verrilli, Bruce M. Walk, Daniel Wind
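    Illustrative sketch: a minimal Python model of sub-dividing a per-flow hash and using the slices hierarchically to pick a link; the CRC32 hash and the two-level grouping are assumptions made for illustration.

      # Hypothetical sketch: one hash slice selects a group of links, another
      # slice selects a member within that group; a flow always lands on one link.
      import zlib

      def select_link(flow_tuple, link_groups):
          h = zlib.crc32(repr(flow_tuple).encode())     # per-flow hash (assumed CRC32)
          group_slice, member_slice = h & 0xFF, (h >> 8) & 0xFF
          group = link_groups[group_slice % len(link_groups)]
          return group[member_slice % len(group)]

      groups = [["eth0", "eth1"], ["eth2", "eth3"]]
      flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 443)    # src, dst, proto, sport, dport
      print(select_link(flow, groups))                  # deterministic per flow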
  • Publication number: 20160224241
    Abstract: Providing memory bandwidth compression using back-to-back read operations by compressed memory controllers (CMCs) in a central processing unit (CPU)-based system is disclosed. In this regard, in some aspects, a CMC is configured to receive a memory read request to a physical address in a system memory, and read a compression indicator (CI) for the physical address from error correcting code (ECC) bits of a first memory block in a memory line associated with the physical address. Based on the CI, the CMC determines whether the first memory block comprises compressed data. If not, the CMC performs a back-to-back read of one or more additional memory blocks of the memory line in parallel with returning the first memory block. Some aspects may further improve memory access latency by writing compressed data to each of a plurality of memory blocks of the memory line, rather than only to the first memory block.
    Type: Application
    Filed: September 3, 2015
    Publication date: August 4, 2016
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Brian Joel Schuh, Michael Raymond Trombley, Natarajan Vaidhyanathan
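    Illustrative sketch: a rough Python model of the back-to-back read decision; the block layout and the four-block line are assumptions made for illustration.

      # Hypothetical sketch: return the first memory block right away; if the CI
      # carried in its ECC bits says "uncompressed", read the rest of the line back-to-back.
      def read_line(line):
          first = line["blocks"][0]
          yield first["data"]                       # first block is returned immediately
          if not first["ci_compressed"]:            # CI from the block's ECC bits
              for block in line["blocks"][1:]:
                  yield block["data"]               # back-to-back reads for the remaining blocks

      line = {"blocks": [{"data": b"A" * 32, "ci_compressed": True}] +
                        [{"data": b"-" * 32, "ci_compressed": False}] * 3}
      print(len(list(read_line(line))))             # 1: the line fit, compressed, in the first block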
  • Patent number: 9306848
    Abstract: Access control lists (ACLs) include one or more rules that each define a condition and one or more actions to be performed if the condition is satisfied. In one embodiment, the conditions are stored on a ternary content-addressable memory (TCAM), which receives a portion of network traffic, such as a frame header, and compares different portions of the header to entries in the TCAM. If the frame header satisfies the condition, the TCAM reports the match to other elements in the ACL. For certain conditions, the TCAM may divide the condition into a plurality of sub-conditions which are each stored in a row of the TCAM. To efficiently use the limited space in TCAM, the networking element may include one or more comparator units which check for special-case conditions. The comparator units may be used in lieu of the TCAM to determine whether the condition is satisfied.
    Type: Grant
    Filed: February 18, 2013
    Date of Patent: April 5, 2016
    Assignee: International Business Machines Corporation
    Inventors: Claude Basso, Natarajan Vaidhyanathan, Colin B. Verrilli
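    Illustrative sketch: a small Python model of off-loading special-case conditions (such as a wide port range) from the TCAM to comparator units; the rule shapes are assumptions made for illustration.

      # Hypothetical sketch: exact-match conditions live in the TCAM; conditions
      # that would expand into many TCAM rows are checked by comparator units.
      def acl_match(frame, tcam_rules, comparators):
          tcam_hit = any(all(frame.get(k) == v for k, v in rule.items()) for rule in tcam_rules)
          comparator_hit = any(check(frame) for check in comparators)
          return tcam_hit or comparator_hit

      tcam_rules = [{"proto": 6, "dst_port": 22}]                 # exact matches stay in the TCAM
      comparators = [lambda f: 1024 <= f["dst_port"] <= 65535]    # the range check uses a comparator
      print(acl_match({"proto": 6, "dst_port": 8080}, tcam_rules, comparators))   # True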
  • Patent number: 9231781
    Abstract: Link aggregation is a practice that uses multiple Ethernet links between two end points in order to obtain higher bandwidth and resiliency than possible with a single link. A flow distribution technique is provided to distribute traffic between the two end points equally across all links in the group and achieve greater efficiency. The flow distribution technique generates and sub-divides a hash value based on received packet flow. The divided portions of the hash value are used in a hierarchical fashion to select a link to use for this packet.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: January 5, 2016
    Assignee: International Business Machines Corporation
    Inventors: Claude Basso, Natarajan Vaidhyanathan, Colin B. Verrilli, Bruce M. Walk, Daniel Wind