Patents by Inventor Srinivasa Rangan
Srinivasa Rangan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12039169
Abstract: A memory controller may include a dynamic arbitration scheme to dynamically vary arbitration factors of two or more traffic classes based on dynamic latency tolerance, requested and available bandwidths on an interconnect from source agents to memory controllers, and other dynamic and static factors.
Type: Grant
Filed: August 31, 2022
Date of Patent: July 16, 2024
Assignee: Apple Inc.
Inventors: Anjana Subramanian, Rohit Natarajan, Yu Simon Zhang, Mukul A. Joshi, Harshavardhan Kaushikkar, Jeonghee Shin, Srinivasa Rangan Sridharan
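A minimal sketch of how an arbitration weight could be varied per traffic class from latency tolerance, requested versus available bandwidth, and a static factor is shown below. The structure fields, weighting formula, and clamping bounds are illustrative assumptions; the patent does not disclose a specific formula.

```cpp
// Sketch: dynamically re-weighting two traffic classes at a memory controller.
// The TrafficClass fields, the weighting formula, and the clamping bounds are
// illustrative assumptions, not the patented scheme.
#include <algorithm>
#include <cstdio>

struct TrafficClass {
    double latency_tolerance_us;  // reported dynamic latency tolerance
    double requested_bw_gbps;     // bandwidth requested on the interconnect
    double available_bw_gbps;     // bandwidth currently available to this class
    int    static_priority;       // static factor (e.g., real-time vs. bulk)
};

// Lower latency tolerance and a larger requested/available gap both push the
// arbitration weight up; a static priority biases the result.
int arbitration_weight(const TrafficClass& tc) {
    double urgency  = 1.0 / std::max(tc.latency_tolerance_us, 1.0);
    double pressure = tc.requested_bw_gbps / std::max(tc.available_bw_gbps, 0.1);
    double weight   = 100.0 * urgency * pressure + 10.0 * tc.static_priority;
    return static_cast<int>(std::clamp(weight, 1.0, 255.0));
}

int main() {
    TrafficClass realtime{20.0, 8.0, 4.0, 3};  // latency-sensitive, under-served
    TrafficClass bulk{500.0, 16.0, 16.0, 0};   // throughput traffic, well-served
    std::printf("realtime weight = %d\n", arbitration_weight(realtime));
    std::printf("bulk weight     = %d\n", arbitration_weight(bulk));
    return 0;
}
```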
-
Patent number: 12007982
Abstract: A system for identifying and deleting records of hosts includes a local data manager. The local data manager identifies a discovery event associated with the host and a record type of record types, obtains, in response to identifying, all previously discovered records on the host associated with the record type from a host record repository, obtains discovered records associated with the record type and associated with the host, after obtaining all discovered records and previously discovered records, selects a previously discovered record of the previously discovered records, makes a determination that the previously discovered record does not match any discovered records of the discovered records and is not tagged with a soft delete tag or a hard delete tag, and in response to the determination, tags the previously discovered record with a soft delete tag, and notifies a data manager of modifications to the previously discovered records.
Type: Grant
Filed: January 27, 2021
Date of Patent: June 11, 2024
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Asif Khan, Kenneth William Owens, Adrian Dobrean, Aneesh Kumar Gurindapalli, Vipin Kumar Kaushal, Yasemin Ugur-Ozekinci, Shelesh Chopra, Gowtham Krishna Iyengar Srinivasa Rangan
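The reconciliation step the abstract describes can be sketched as follows: a previously discovered record that no longer appears in the latest discovery, and that carries neither delete tag, is tagged for soft deletion and reported. The type and field names are illustrative assumptions, not EMC's implementation.

```cpp
// Sketch of the soft-delete reconciliation pass over previously discovered records.
#include <string>
#include <unordered_set>
#include <vector>
#include <iostream>

enum class DeleteTag { None, Soft, Hard };

struct Record {
    std::string id;      // identity of the discovered resource
    DeleteTag   tag = DeleteTag::None;
};

// Returns the ids of records whose tags were modified, so a data manager can
// be notified of the changes.
std::vector<std::string> reconcile(std::vector<Record>& previously_discovered,
                                   const std::vector<Record>& newly_discovered) {
    std::unordered_set<std::string> current;
    for (const auto& r : newly_discovered) current.insert(r.id);

    std::vector<std::string> modified;
    for (auto& prev : previously_discovered) {
        bool still_present = current.count(prev.id) > 0;
        if (!still_present && prev.tag == DeleteTag::None) {
            prev.tag = DeleteTag::Soft;   // mark, rather than delete outright
            modified.push_back(prev.id);
        }
    }
    return modified;
}

int main() {
    std::vector<Record> previous = {{"vm-1"}, {"vm-2"}, {"vm-3"}};
    std::vector<Record> latest   = {{"vm-1"}, {"vm-3"}};
    for (const auto& id : reconcile(previous, latest))
        std::cout << "soft-deleted: " << id << "\n";  // prints vm-2
    return 0;
}
```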
-
Publication number: 20240168887
Abstract: A cache may store critical cache lines and non-critical cache lines, and may attempt to retain critical cache lines in the cache by, for example, favoring the critical cache lines in replacement data updates. Multiple levels of criticality may be available for a given cache line, and cache circuitry may adjust the criticality value in response to a criticality event. One or more upper criticality levels may be masked when selecting a victim cache line for replacement.
Type: Application
Filed: January 25, 2024
Publication date: May 23, 2024
Inventors: Tyler J. Huberty, Vivek Venkatraman, Sandeep Gupta, Eric J. Furbish, Srinivasa Rangan Sridharan, Stephen G. Meier
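A sketch of victim selection with masked upper criticality levels follows: ways at or above the mask threshold are passed over unless every way in the set is masked. The criticality encoding and the fallback policy are assumptions.

```cpp
// Sketch: choose a victim way while masking out upper criticality levels.
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct CacheLine {
    uint8_t criticality = 0;  // 0 = non-critical, higher = more critical
    uint8_t lru_age     = 0;  // larger = older (better victim)
};

// Pick the oldest line whose criticality is below `mask_level`; if all lines
// are masked, fall back to the oldest line overall.
template <std::size_t Ways>
std::size_t select_victim(const std::array<CacheLine, Ways>& set, uint8_t mask_level) {
    int best = -1;
    std::size_t fallback = 0;
    for (std::size_t w = 0; w < Ways; ++w) {
        if (set[w].lru_age > set[fallback].lru_age) fallback = w;
        if (set[w].criticality < mask_level &&
            (best < 0 || set[w].lru_age > set[static_cast<std::size_t>(best)].lru_age))
            best = static_cast<int>(w);
    }
    return best >= 0 ? static_cast<std::size_t>(best) : fallback;
}

int main() {
    std::array<CacheLine, 4> set = {{{3, 7}, {0, 5}, {1, 2}, {2, 6}}};
    // Mask criticality levels >= 2: the oldest unmasked way (age 5, way 1) is chosen.
    std::printf("victim way = %zu\n", select_victim(set, 2));
    return 0;
}
```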
-
Publication number: 20240126457
Abstract: An apparatus includes a cache controller circuit and a cache memory circuit that further includes cache memory having a plurality of cache lines. The cache controller circuit may be configured to receive a request to reallocate a portion of the cache memory circuit that is currently in use. This request may identify an address region corresponding to one or more of the cache lines. The cache controller circuit may be further configured, in response to the request, to convert the one or more cache lines to directly-addressable, random-access memory (RAM) by excluding the one or more cache lines from cache operations.
Type: Application
Filed: December 11, 2023
Publication date: April 18, 2024
Inventors: Rohit Natarajan, Jurgen M. Schulz, Christopher D. Shuler, Rohit K. Gupta, Thomas T. Zou, Srinivasa Rangan Sridharan
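The "cache-as-RAM" conversion can be sketched as flagging the lines that fall inside the reallocated address region so that normal lookup, fill, and eviction skip them. The per-line flag and the address arithmetic are assumptions for illustration.

```cpp
// Sketch: exclude a reallocated address region's cache lines from cache operations.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Line {
    uint64_t base_addr = 0;
    bool     ram_mode  = false;  // excluded from cache operations when true
};

class CacheController {
public:
    CacheController(std::size_t num_lines, uint64_t line_size) : lines_(num_lines) {
        for (std::size_t i = 0; i < num_lines; ++i) lines_[i].base_addr = i * line_size;
    }

    // Reallocate [start, start+len) as directly-addressable RAM.
    void convert_to_ram(uint64_t start, uint64_t len) {
        for (auto& l : lines_)
            if (l.base_addr >= start && l.base_addr < start + len)
                l.ram_mode = true;
    }

    // Eviction candidates are only the lines still participating in caching.
    std::size_t evictable_lines() const {
        std::size_t n = 0;
        for (const auto& l : lines_) if (!l.ram_mode) ++n;
        return n;
    }

private:
    std::vector<Line> lines_;
};

int main() {
    CacheController cc(/*num_lines=*/1024, /*line_size=*/64);
    cc.convert_to_ram(/*start=*/0, /*len=*/16 * 1024);            // carve out 16 KiB
    std::printf("evictable lines: %zu\n", cc.evictable_lines());  // 768
    return 0;
}
```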
-
Patent number: 11941428
Abstract: Techniques are disclosed relating to an I/O agent circuit. The I/O agent circuit may include one or more queues and a transaction pipeline. The I/O agent circuit may issue, to the transaction pipeline from a queue of the one or more queues, a transaction of a series of transactions enqueued in a particular order. The I/O agent circuit may generate, at the transaction pipeline, a determination to return the transaction to the queue based on a detection of one or more conditions being satisfied. Based on the determination, the I/O agent circuit may reject, at the transaction pipeline, up to a threshold number of transactions that issued from the queue after the transaction issued. The I/O agent circuit may insert the transaction at a head of the queue such that the transaction is enqueued at the queue sequentially first for the series of transactions according to the particular order.
Type: Grant
Filed: March 31, 2022
Date of Patent: March 26, 2024
Assignee: Apple Inc.
Inventors: Sagi Lahav, Lital Levy-Rubin, Gaurav Garg, Gerard R. Williams, III, Samer Nassar, Per H. Hammarlund, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan, Jeff Gonion
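A sketch of the bounce-and-reinsert behavior follows: when the pipeline returns a transaction, that transaction goes back to the head of its queue and up to a threshold number of younger transactions already issued behind it are rejected, so the original order is preserved. The queue data structures are assumptions.

```cpp
// Sketch: return a transaction to the head of the queue and reject younger issues.
#include <cstddef>
#include <cstdio>
#include <deque>
#include <vector>

struct Txn { int id; };

class IoAgentQueue {
public:
    void enqueue(Txn t) { q_.push_back(t); }

    Txn issue() {                       // oldest transaction leaves for the pipeline
        Txn t = q_.front();
        q_.pop_front();
        in_flight_.push_back(t);
        return t;
    }

    // Pipeline hit a blocking condition on `t`: put it back at the head and
    // reject up to `threshold` transactions that issued after it.
    void bounce(const Txn& t, std::size_t threshold) {
        std::size_t rejected = 0;
        // Walk younger in-flight transactions from youngest to oldest and
        // push them back to the front so the original order is restored.
        while (!in_flight_.empty() && in_flight_.back().id != t.id &&
               rejected < threshold) {
            q_.push_front(in_flight_.back());
            in_flight_.pop_back();
            ++rejected;
        }
        if (!in_flight_.empty() && in_flight_.back().id == t.id) in_flight_.pop_back();
        q_.push_front(t);               // bounced transaction is sequentially first again
    }

    int head_id() const { return q_.front().id; }

private:
    std::deque<Txn> q_;
    std::vector<Txn> in_flight_;
};

int main() {
    IoAgentQueue q;
    for (int i = 0; i < 4; ++i) q.enqueue({i});
    Txn t0 = q.issue();                 // issue 0, 1, 2
    q.issue();
    q.issue();
    q.bounce(t0, /*threshold=*/8);      // 0 returns to the head; 1 and 2 are rejected
    std::printf("head of queue: %d\n", q.head_id());  // 0
    return 0;
}
```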
-
Patent number: 11921640
Abstract: A cache may store critical cache lines and non-critical cache lines, and may attempt to retain critical cache lines in the cache by, for example, favoring the critical cache lines in replacement data updates, retaining the critical cache lines with a certain probability when victim cache blocks are being selected, etc. Criticality values may be retained at various levels of the cache hierarchy. Additionally, accelerated eviction may be employed if the threads previously accessing the critical cache blocks are viewed as dead.
Type: Grant
Filed: April 22, 2022
Date of Patent: March 5, 2024
Assignee: Apple Inc.
Inventors: Tyler J. Huberty, Vivek Venkatraman, Sandeep Gupta, Eric J. Furbish, Srinivasa Rangan Sridharan, Stephan G. Meier
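Two policies from this abstract can be sketched together: a critical victim candidate is retained with some probability (an alternative way is evicted instead), and a critical line whose owning thread is considered dead becomes eligible for accelerated eviction. The probability value and data layout are assumptions.

```cpp
// Sketch: probabilistic retention of critical lines plus accelerated eviction.
#include <array>
#include <cstddef>
#include <cstdio>
#include <random>

struct Way {
    bool critical   = false;
    bool owner_dead = false;  // thread that marked it critical is gone
    int  lru_age    = 0;
};

template <std::size_t N>
std::size_t pick_victim(const std::array<Way, N>& set, std::mt19937& rng,
                        double retain_probability = 0.75) {
    std::bernoulli_distribution retain(retain_probability);
    std::size_t oldest = 0, oldest_noncrit = N;  // N means "none found"
    for (std::size_t w = 0; w < N; ++w) {
        if (set[w].critical && set[w].owner_dead) return w;   // accelerated eviction
        if (set[w].lru_age > set[oldest].lru_age) oldest = w;
        if (!set[w].critical &&
            (oldest_noncrit == N || set[w].lru_age > set[oldest_noncrit].lru_age))
            oldest_noncrit = w;
    }
    // If the oldest way is critical, retain it with the given probability by
    // falling back to the oldest non-critical way when one exists.
    if (set[oldest].critical && oldest_noncrit != N && retain(rng)) return oldest_noncrit;
    return oldest;
}

int main() {
    std::mt19937 rng(42);
    std::array<Way, 4> set = {{{true, false, 9}, {false, false, 3},
                               {false, false, 7}, {true, true, 1}}};
    std::printf("victim way = %zu\n", pick_victim(set, rng));  // way 3: dead owner
    return 0;
}
```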
-
Patent number: 11893251
Abstract: A non-transitory computer-readable medium is disclosed, the medium having instructions stored thereon that are executable by a computer system to perform operations that may include allocating a plurality of storage locations in a system memory of the computer system to a buffer. The operations may further include selecting a particular order for allocating the plurality of storage locations into a cache memory circuit. This particular order may increase a uniformity of cache miss rates in comparison to a linear order. The operations may also include caching subsets of the plurality of storage locations of the buffer using the particular order.
Type: Grant
Filed: August 31, 2021
Date of Patent: February 6, 2024
Assignee: Apple Inc.
Inventors: Rohit Natarajan, Jurgen M. Schulz, Christopher D. Shuler, Rohit K. Gupta, Thomas T. Zou, Srinivasa Rangan Sridharan
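One way to produce a non-linear caching order is sketched below using a bit-reversal permutation, which spreads accesses across the index space more uniformly than a linear walk. This is purely illustrative; the patent does not specify which ordering is used.

```cpp
// Sketch: generate a non-linear order for caching a buffer's storage locations.
#include <cstdint>
#include <cstdio>
#include <vector>

// Reverse the low `bits` bits of `x`.
uint32_t bit_reverse(uint32_t x, unsigned bits) {
    uint32_t r = 0;
    for (unsigned i = 0; i < bits; ++i) {
        r = (r << 1) | (x & 1);
        x >>= 1;
    }
    return r;
}

// Produce the order in which the buffer's `n` locations (n a power of two)
// would be brought into the cache.
std::vector<uint32_t> caching_order(uint32_t n, unsigned bits) {
    std::vector<uint32_t> order(n);
    for (uint32_t i = 0; i < n; ++i) order[i] = bit_reverse(i, bits);
    return order;
}

int main() {
    for (uint32_t idx : caching_order(8, 3)) std::printf("%u ", idx);
    std::printf("\n");  // 0 4 2 6 1 5 3 7 instead of the linear 0 1 2 3 4 5 6 7
    return 0;
}
```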
-
Patent number: 11822480
Abstract: A cache may store critical cache lines and non-critical cache lines, and may attempt to retain critical cache lines in the cache by, for example, favoring the critical cache lines in replacement data updates, retaining the critical cache lines with a certain probability when victim cache blocks are being selected, etc. Criticality values may be retained at various levels of the cache hierarchy. Additionally, accelerated eviction may be employed if the threads previously accessing the critical cache blocks are viewed as dead.
Type: Grant
Filed: April 22, 2022
Date of Patent: November 21, 2023
Assignee: Apple Inc.
Inventors: Tyler J. Huberty, Vivek Venkatraman, Sandeep Gupta, Eric J. Furbish, Srinivasa Rangan Sridharan, Stephan G. Meier
-
Patent number: 11803471
Abstract: An integrated circuit (IC) including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture. For example, the IC may include an interconnect fabric configured to provide communication between the one or more memory controller circuits and the processor cores, graphics processing units, and peripheral devices; and an off-chip interconnect coupled to the interconnect fabric and configured to couple the interconnect fabric to a corresponding interconnect fabric on another instance of the integrated circuit, wherein the interconnect fabric and the off-chip interconnect provide an interface that transparently connects the one or more memory controller circuits, the processor cores, graphics processing units, and peripheral devices in either a single instance of the integrated circuit or two or more instances of the integrated circuit.
Type: Grant
Filed: August 22, 2022
Date of Patent: October 31, 2023
Assignee: Apple Inc.
Inventors: Per H. Hammarlund, Lior Zimet, Sergio Kolor, Sagi Lahav, James Vash, Gaurav Garg, Tal Kuzi, Jeffry E. Gonion, Charles E. Tucker, Lital Levy-Rubin, Dany Davidov, Steven Fishwick, Nir Leshem, Mark Pilip, Gerard R. Williams, III, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan
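A toy model of the transparency property is sketched below: software sees one flat address map, and a routing function decides per request which die and memory controller serve it, whether one or two die instances are present. The interleave granularity, controller count, and routing rule are assumptions, not Apple's fabric design.

```cpp
// Toy model: one unified address map interleaved across one or two die instances.
#include <cstdint>
#include <cstdio>

struct Route {
    unsigned die;         // which IC instance serves the request
    unsigned controller;  // memory controller within that die
};

Route route(uint64_t addr, unsigned num_dies, unsigned controllers_per_die,
            uint64_t interleave_bytes = 256) {
    uint64_t chunk = addr / interleave_bytes;
    unsigned total = num_dies * controllers_per_die;
    unsigned target = static_cast<unsigned>(chunk % total);
    return {target / controllers_per_die, target % controllers_per_die};
}

int main() {
    // The same software-visible addresses map onto whichever dies are present.
    for (uint64_t addr : {0x0000ULL, 0x0100ULL, 0x0200ULL, 0x0300ULL}) {
        Route r = route(addr, /*num_dies=*/2, /*controllers_per_die=*/2);
        std::printf("addr 0x%04llx -> die %u, MC %u\n",
                    static_cast<unsigned long long>(addr), r.die, r.controller);
    }
    return 0;
}
```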
-
Publication number: 20230325086
Abstract: A memory controller may include a dynamic arbitration scheme to dynamically vary arbitration factors of two or more traffic classes based on dynamic latency tolerance, requested and available bandwidths on an interconnect from source agents to memory controllers, and other dynamic and static factors.
Type: Application
Filed: August 31, 2022
Publication date: October 12, 2023
Inventors: Anjana Subramanian, Rohit Natarajan, Yu Simon Zhang, Mukul A. Joshi, Harshavardhan Kaushikkar, Jeonghee Shin, Srinivasa Rangan Sridharan
-
Patent number: 11704245
Abstract: An apparatus includes a cache controller circuit and a cache memory circuit that further includes cache memory having a plurality of cache lines. The cache controller circuit may be configured to receive a request to reallocate a portion of the cache memory circuit that is currently in use. This request may identify an address region corresponding to one or more of the cache lines. The cache controller circuit may be further configured, in response to the request, to convert the one or more cache lines to directly-addressable, random-access memory (RAM) by excluding the one or more cache lines from cache operations.
Type: Grant
Filed: August 31, 2021
Date of Patent: July 18, 2023
Assignee: Apple Inc.
Inventors: Rohit Natarajan, Jurgen M. Schulz, Christopher D. Shuler, Rohit K. Gupta, Thomas T. Zou, Srinivasa Rangan Sridharan
-
Publication number: 20230064526
Abstract: Techniques are disclosed relating to an I/O agent circuit. The I/O agent circuit may include one or more queues and a transaction pipeline. The I/O agent circuit may issue, to the transaction pipeline from a queue of the one or more queues, a transaction of a series of transactions enqueued in a particular order. The I/O agent circuit may generate, at the transaction pipeline, a determination to return the transaction to the queue based on a detection of one or more conditions being satisfied. Based on the determination, the I/O agent circuit may reject, at the transaction pipeline, up to a threshold number of transactions that issued from the queue after the transaction issued. The I/O agent circuit may insert the transaction at a head of the queue such that the transaction is enqueued at the queue sequentially first for the series of transactions according to the particular order.
Type: Application
Filed: March 31, 2022
Publication date: March 2, 2023
Inventors: Sagi Lahav, Lital Levy-Rubin, Gaurav Garg, Gerard R. Williams, III, Samer Nassar, Per H. Hammarlund, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan, Jeff Gonion
-
Publication number: 20230063676
Abstract: Techniques are disclosed relating to an I/O agent circuit. The I/O agent circuit may include a transaction pipeline and a pool of counters. The I/O agent circuit may initialize a first counter included in the pool of counters with an initial counter value. The I/O agent circuit may assign the first counter to a specific transaction type. The I/O agent circuit may increment the first counter as a part of allocating a transaction of a transaction type included in a set of transaction types different than the specific transaction type. Based on receiving a transaction request to process a first transaction of the specific transaction type, the I/O agent circuit may bind the first transaction to the first counter. The I/O agent circuit may issue the first transaction to the transaction pipeline based on a counter value stored by the first counter matching the initial counter value.
Type: Application
Filed: March 31, 2022
Publication date: March 2, 2023
Inventors: Sagi Lahav, Lital Levy-Rubin, Gaurav Garg, Gerard R. Williams, III, Samer Nassar, Per H. Hammarlund, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan, Jeff Gonion
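The counter-pool ordering mechanism can be sketched as follows: a counter assigned to one transaction type is incremented whenever a transaction of the other types is allocated, and a transaction of the assigned type that was bound to the counter is only issued once the counter is back at its initial value. The decrement-on-completion step is an assumption about how the count drains; the abstract only describes the increment and the issue condition.

```cpp
// Sketch: ordering one transaction type behind previously allocated other types.
#include <cstdio>

class TypeCounter {
public:
    explicit TypeCounter(int initial = 0) : initial_(initial), value_(initial) {}

    void on_other_type_allocated()  { ++value_; }  // e.g., a write allocated before our read
    void on_other_type_completed()  { --value_; }  // assumed drain path
    bool bound_txn_may_issue() const { return value_ == initial_; }

private:
    int initial_;
    int value_;
};

int main() {
    TypeCounter ctr;                 // assigned to, say, the "read" type
    ctr.on_other_type_allocated();   // two older writes are allocated
    ctr.on_other_type_allocated();
    // A read arrives and is bound to ctr; it must wait for the writes.
    std::printf("issue read? %s\n", ctr.bound_txn_may_issue() ? "yes" : "no");  // no
    ctr.on_other_type_completed();
    ctr.on_other_type_completed();
    std::printf("issue read? %s\n", ctr.bound_txn_may_issue() ? "yes" : "no");  // yes
    return 0;
}
```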
-
Publication number: 20230066236
Abstract: A cache may store critical cache lines and non-critical cache lines, and may attempt to retain critical cache lines in the cache by, for example, favoring the critical cache lines in replacement data updates, retaining the critical cache lines with a certain probability when victim cache blocks are being selected, etc. Criticality values may be retained at various levels of the cache hierarchy. Additionally, accelerated eviction may be employed if the threads previously accessing the critical cache blocks are viewed as dead.
Type: Application
Filed: April 22, 2022
Publication date: March 2, 2023
Inventors: Tyler J. Huberty, Vivek Venkatraman, Sandeep Gupta, Eric J. Furbish, Srinivasa Rangan Sridharan, Stephan G. Meier
-
Publication number: 20230067307
Abstract: An apparatus includes a cache controller circuit and a cache memory circuit that further includes cache memory having a plurality of cache lines. The cache controller circuit may be configured to receive a request to reallocate a portion of the cache memory circuit that is currently in use. This request may identify an address region corresponding to one or more of the cache lines. The cache controller circuit may be further configured, in response to the request, to convert the one or more cache lines to directly-addressable, random-access memory (RAM) by excluding the one or more cache lines from cache operations.
Type: Application
Filed: August 31, 2021
Publication date: March 2, 2023
Inventors: Rohit Natarajan, Jurgen M. Schulz, Christopher D. Shuler, Rohit K. Gupta, Thomas T. Zou, Srinivasa Rangan Sridharan
-
Publication number: 20230062917
Abstract: A non-transitory computer-readable medium is disclosed, the medium having instructions stored thereon that are executable by a computer system to perform operations that may include allocating a plurality of storage locations in a system memory of the computer system to a buffer. The operations may further include selecting a particular order for allocating the plurality of storage locations into a cache memory circuit. This particular order may increase a uniformity of cache miss rates in comparison to a linear order. The operations may also include caching subsets of the plurality of storage locations of the buffer using the particular order.
Type: Application
Filed: August 31, 2021
Publication date: March 2, 2023
Inventors: Rohit Natarajan, Jurgen M. Schulz, Christopher D. Shuler, Rohit K. Gupta, Thomas T. Zou, Srinivasa Rangan Sridharan
-
Publication number: 20230060225
Abstract: A cache may store critical cache lines and non-critical cache lines, and may attempt to retain critical cache lines in the cache by, for example, favoring the critical cache lines in replacement data updates, retaining the critical cache lines with a certain probability when victim cache blocks are being selected, etc. Criticality values may be retained at various levels of the cache hierarchy. Additionally, accelerated eviction may be employed if the threads previously accessing the critical cache blocks are viewed as dead.
Type: Application
Filed: April 22, 2022
Publication date: March 2, 2023
Inventors: Tyler J. Huberty, Vivek Venkatraman, Sandeep Gupta, Eric J. Furbish, Srinivasa Rangan Sridharan, Stephan G. Meier
-
Patent number: 11550716
Abstract: Techniques are disclosed relating to an I/O agent circuit of a computer system. The I/O agent circuit may receive, from a peripheral component, a set of transaction requests to perform a set of read transactions that are directed to one or more of a plurality of cache lines. The I/O agent circuit may issue, to a first memory controller circuit configured to manage access to a first one of the plurality of cache lines, a request for exclusive read ownership of the first cache line such that data of the first cache line is not cached outside of the memory and the I/O agent circuit in a valid state. The I/O agent circuit may receive exclusive read ownership of the first cache line, including receiving the data of the first cache line. The I/O agent circuit may then perform the set of read transactions with respect to the data.
Type: Grant
Filed: January 14, 2022
Date of Patent: January 10, 2023
Assignee: Apple Inc.
Inventors: Gaurav Garg, Sagi Lahav, Lital Levy-Rubin, Gerard Williams, III, Samer Nassar, Per H. Hammarlund, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan, Jeff Gonion, James Vash
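The exclusive-read-ownership flow can be sketched as follows: the I/O agent asks the memory controller that owns a line for exclusive read ownership, the controller invalidates any other valid cached copy, and the agent then services the peripheral's reads from the returned data. The directory structure and message shapes here are assumptions, not Apple's coherency protocol.

```cpp
// Sketch: memory controller granting exclusive read ownership to an I/O agent.
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct CacheLineState {
    uint64_t data = 0;
    int      valid_in_agent = -1;   // id of the one agent holding a valid copy, or -1
};

class MemoryController {
public:
    // Grant exclusive read ownership of `addr` to `agent_id`; any other
    // holder's copy is invalidated so no valid copy exists outside the
    // memory and the requesting I/O agent.
    uint64_t request_exclusive_read(uint64_t addr, int agent_id) {
        CacheLineState& line = lines_[addr];
        if (line.valid_in_agent != -1 && line.valid_in_agent != agent_id)
            line.valid_in_agent = -1;          // snoop-invalidate the other holder
        line.valid_in_agent = agent_id;
        return line.data;
    }

    void store(uint64_t addr, uint64_t value) { lines_[addr].data = value; }

private:
    std::unordered_map<uint64_t, CacheLineState> lines_;
};

int main() {
    MemoryController mc;
    mc.store(0x1000, 0xCAFE);
    // I/O agent 7 batches a set of peripheral reads against line 0x1000.
    uint64_t data = mc.request_exclusive_read(0x1000, /*agent_id=*/7);
    std::printf("agent 7 reads 0x%llx\n", static_cast<unsigned long long>(data));
    return 0;
}
```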
-
Patent number: 11537538
Abstract: In one embodiment, a cache coherent system includes one or more agents (e.g., coherent agents) that may cache data used by the system. The system may include a point of coherency in a memory controller in the system, and thus the agents may transmit read requests to the memory controller to coherently read data. The point of coherency may determine if the data is cached in another agent, and may transmit a copy back request to the other agent if the other agent has modified the data. The system may include an interconnect between the agents and the memory controller. At a point on the interconnect at which traffic from the agents converges, a copy back response may be converted to a fill for the requesting agent.
Type: Grant
Filed: April 27, 2021
Date of Patent: December 27, 2022
Assignee: Apple Inc.
Inventors: Harshavardhan Kaushikkar, Christopher D. Shuler, Srinivasa Rangan Sridharan, Yu Zhang, Kaushik Kannan, Deniz Balkan
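The conversion at the convergence point can be sketched as rewriting the message: instead of carrying the dirty data on to the memory controller and having it forwarded back, the copy back response is retargeted at the original requester as a fill. The message fields are assumptions for illustration.

```cpp
// Sketch: convert a copy back response into a fill at the traffic convergence point.
#include <cstdint>
#include <cstdio>

enum class MsgType { CopyBackResponse, Fill };

struct Message {
    MsgType  type;
    int      destination;   // agent id or memory-controller id
    uint64_t addr;
    uint64_t data;
};

// Called at the convergence point. `requester` is the agent whose coherent
// read triggered the copy back request to the modifying agent.
Message convert_at_convergence(Message copy_back, int requester) {
    if (copy_back.type == MsgType::CopyBackResponse) {
        copy_back.type = MsgType::Fill;     // same data payload, new meaning
        copy_back.destination = requester;  // short-circuit the trip to memory
    }
    return copy_back;
}

int main() {
    Message cb{MsgType::CopyBackResponse, /*destination=*/99 /*memory ctrl*/,
               0x2000, 0xBEEF};
    Message fill = convert_at_convergence(cb, /*requester=*/3);
    std::printf("fill to agent %d, addr 0x%llx\n", fill.destination,
                static_cast<unsigned long long>(fill.addr));
    return 0;
}
```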
-
Publication number: 20220318136
Abstract: Techniques are disclosed relating to an I/O agent circuit of a computer system. The I/O agent circuit may receive, from a peripheral component, a set of transaction requests to perform a set of read transactions that are directed to one or more of a plurality of cache lines. The I/O agent circuit may issue, to a first memory controller circuit configured to manage access to a first one of the plurality of cache lines, a request for exclusive read ownership of the first cache line such that data of the first cache line is not cached outside of the memory and the I/O agent circuit in a valid state. The I/O agent circuit may receive exclusive read ownership of the first cache line, including receiving the data of the first cache line. The I/O agent circuit may then perform the set of read transactions with respect to the data.
Type: Application
Filed: January 14, 2022
Publication date: October 6, 2022
Inventors: Gaurav Garg, Sagi Lahav, Lital Levy-Rubin, Gerard Williams, III, Samer Nassar, Per H. Hammarlund, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan, Jeff Gonion, James Vash