Patents by Inventor Gurunath RAMAGIRI
Gurunath RAMAGIRI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12174753
Abstract: Aspects of the present disclosure relate to an apparatus comprising processing circuitry, first cache circuitry and second cache circuitry, wherein the second cache circuitry has an access latency higher than an access latency of the first cache circuitry. The second cache circuitry is responsive to receiving a request for data stored within the second cache circuitry to identify said data as pseudo-invalid data and provide said data to the first cache circuitry. The second cache circuitry is responsive to receiving an eviction indication, indicating that the first cache circuitry is to evict said data, to, responsive to determining that said data has not been modified since said data was provided to the first cache circuitry, identify said pseudo-invalid data as valid data.
Type: Grant
Filed: November 18, 2021
Date of Patent: December 24, 2024
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Klas Magnus Bruce, Jamshed Jalal, Dimitrios Kaseridis, Gurunath Ramagiri, Ho-Seop Kim, Andrew John Turner, Rania Hussein Hassan Mameesh
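The pseudo-invalid mechanism can be sketched in a few lines: the slower cache hands a line to the faster cache, keeps the data under a pseudo-invalid tag, and silently revalidates it if the line later comes back clean, avoiding a data transfer. This is an illustrative model only — the class, state names, and data values are invented for the sketch, not the circuitry the patent claims.

```python
# Toy model of "pseudo-invalid" lines in a slower second-level cache.
from enum import Enum

class State(Enum):
    INVALID = 0
    VALID = 1
    PSEUDO_INVALID = 2   # data still held, but treated as invalid for lookups

class SecondLevelCache:
    def __init__(self):
        self.lines = {}  # address -> (State, data)

    def read_for_upper(self, addr):
        """Supply a line to the faster cache, demoting it to pseudo-invalid."""
        state, data = self.lines[addr]
        self.lines[addr] = (State.PSEUDO_INVALID, data)
        return data

    def on_eviction(self, addr, modified, data=None):
        """Faster cache evicts: a clean line is revalidated in place (no data
        movement needed); a dirty line installs the modified data."""
        if modified:
            self.lines[addr] = (State.VALID, data)
        else:
            _, old_data = self.lines[addr]
            self.lines[addr] = (State.VALID, old_data)

cache = SecondLevelCache()
cache.lines[0x80] = (State.VALID, b"hello")
assert cache.read_for_upper(0x80) == b"hello"
assert cache.lines[0x80][0] is State.PSEUDO_INVALID
cache.on_eviction(0x80, modified=False)          # clean eviction
assert cache.lines[0x80] == (State.VALID, b"hello")
```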
-
Publication number: 20240273026
Abstract: A data processing apparatus includes one or more cache configuration data stores, a coherence manager, and a shared cache. The coherence manager is configured to track and maintain coherency of cache lines accessed by local caching agents and one or more remote caching agents. The cache lines include local cache lines accessed from a local memory region and remote cache lines accessed from a remote memory region. The shared cache is configured to store local cache lines in a first partition and to store remote cache lines in a second partition. The sizes of the first and second partitions are determined based on values in the one or more cache configuration data stores and may or may not overlap. The cache configuration data stores may be programmable by a user or dynamically programmed in response to local memory and remote memory access patterns.
Type: Application
Filed: February 14, 2023
Publication date: August 15, 2024
Applicant: Arm Limited
Inventors: Devi Sravanthi Yalamarthy, Jamshed Jalal, Mark David Werkheiser, Wenxuan Zhang, Ritukar Khanna, Rajani Pai, Gurunath Ramagiri, Mukesh Patel, Tushar P Ringe
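The partitioning idea can be sketched as allocation-way masking: which ways of the shared cache a new line may occupy depends on whether the line is local or remote, with the split taken from configuration values. The way counts and the disjoint split below are assumptions for illustration (the publication allows the partitions to overlap).

```python
# Sketch: a shared cache split into local-line and remote-line partitions,
# sized from configuration values.
class PartitionedSharedCache:
    def __init__(self, total_ways=8, local_ways=6, remote_ways=2):
        # Configuration data stores decide the split; disjoint here for clarity.
        assert local_ways + remote_ways <= total_ways
        self.local_ways = frozenset(range(local_ways))
        self.remote_ways = frozenset(range(local_ways, local_ways + remote_ways))

    def candidate_ways(self, line_is_remote):
        """Ways a newly allocated line may be placed into."""
        return self.remote_ways if line_is_remote else self.local_ways

cache = PartitionedSharedCache()
assert cache.candidate_ways(line_is_remote=False) == frozenset(range(6))
assert cache.candidate_ways(line_is_remote=True) == frozenset({6, 7})
```

Dynamic reprogramming of the configuration stores would simply rebuild these two sets as access patterns shift.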
-
Publication number: 20240273025
Abstract: A super home node of a first chip of a multi-chip data processing system manages coherence for both local and remote cache lines accessed by local caching agents and local cache lines accessed by caching agents of one or more second chips. Both local and remote cache lines are stored in a shared cache, and requests are stored in a shared point-of-coherency queue. An entry in a snoop filter table of the super home node includes a presence vector that indicates the presence of a remote cache line at specific caching agents of the first chip or the presence of a local cache line at specific caching agents of the first chip and any caching agent of the second chip. All caching agents of the second chip are represented as a single caching agent in the presence vector.
Type: Application
Filed: February 14, 2023
Publication date: August 15, 2024
Applicant: Arm Limited
Inventors: Wenxuan Zhang, Jamshed Jalal, Mark David Werkheiser, Sakshi Verma, Ritukar Khanna, Devi Sravanthi Yalamarthy, Gurunath Ramagiri, Mukesh Patel, Tushar P Ringe
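The presence-vector compression can be illustrated with a small bitmask: each local caching agent owns its own bit, while every caching agent on the second chip folds onto one shared bit, so a single cross-chip snoop covers them all. Agent names and the vector layout below are invented for the sketch.

```python
# Sketch: presence vector with all remote-chip agents collapsed onto one bit.
LOCAL_AGENTS = ["cpu0", "cpu1", "gpu0"]   # hypothetical local caching agents
REMOTE_BIT = len(LOCAL_AGENTS)            # one bit stands for the whole remote chip

def mark_present(vector, agent):
    if agent.startswith("remote:"):
        return vector | (1 << REMOTE_BIT)
    return vector | (1 << LOCAL_AGENTS.index(agent))

def snoop_targets(vector):
    targets = [a for i, a in enumerate(LOCAL_AGENTS) if vector & (1 << i)]
    if vector & (1 << REMOTE_BIT):
        targets.append("remote-chip")     # one snoop reaches all remote agents
    return targets

v = mark_present(0, "cpu1")
v = mark_present(v, "remote:cpu5")
v = mark_present(v, "remote:cpu7")        # collapses onto the same bit as cpu5
assert snoop_targets(v) == ["cpu1", "remote-chip"]
```

The pay-off is a fixed-width vector whose size does not grow with the number of caching agents on the other chip.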
-
Publication number: 20230418766
Abstract: Aspects of the present disclosure relate to an apparatus comprising processing circuitry, first cache circuitry and second cache circuitry, wherein the second cache circuitry has an access latency higher than an access latency of the first cache circuitry. The second cache circuitry is responsive to receiving a request for data stored within the second cache circuitry to identify said data as pseudo-invalid data and provide said data to the first cache circuitry. The second cache circuitry is responsive to receiving an eviction indication, indicating that the first cache circuitry is to evict said data, to, responsive to determining that said data has not been modified since said data was provided to the first cache circuitry, identify said pseudo-invalid data as valid data.
Type: Application
Filed: November 18, 2021
Publication date: December 28, 2023
Inventors: Joseph Michael PUSDESRIS, Klas Magnus BRUCE, Jamshed JALAL, Dimitrios KASERIDIS, Gurunath RAMAGIRI, Ho-Seop KIM, Andrew John TURNER, Rania Hussein Hassan MAMEESH
-
Publication number: 20230221866
Abstract: A technique for handling memory access requests is described. An apparatus has an interconnect for coupling a plurality of requester elements with a plurality of slave elements. The requester elements are arranged to issue memory access requests for processing by the slave elements. An intermediate element within the interconnect acts as a point of serialisation to order the memory access requests issued by requester elements via the intermediate element. The intermediate element has tracking circuitry for tracking handling of the memory access requests accepted by the intermediate element. Further, request acceptance management circuitry is provided to identify a target slave element amongst the plurality of slave elements for a given memory access request, and to determine whether the given memory access request is to be accepted by the intermediate element dependent on an indication of bandwidth capability for the target slave element.
Type: Application
Filed: May 20, 2021
Publication date: July 13, 2023
Inventors: Jamshed JALAL, Gurunath RAMAGIRI, Tushar P RINGE, Mark David WERKHEISER, Ashok Kumar TUMMALA, Dimitrios KASERIDIS
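A minimal sketch of bandwidth-aware acceptance, assuming the "bandwidth capability" is expressed as a cap on outstanding requests per slave (the capability encoding is an assumption, not the published mechanism):

```python
# Sketch: the point of serialisation accepts a request only while the target
# slave element is below its advertised bandwidth capability.
class PointOfSerialisation:
    def __init__(self, capability):
        self.capability = dict(capability)       # slave -> max outstanding requests
        self.outstanding = {s: 0 for s in capability}

    def try_accept(self, target_slave):
        if self.outstanding[target_slave] >= self.capability[target_slave]:
            return False                         # back-pressure the requester
        self.outstanding[target_slave] += 1
        return True

    def complete(self, target_slave):
        self.outstanding[target_slave] -= 1

pos = PointOfSerialisation({"dram": 2, "sram": 4})
assert pos.try_accept("dram") and pos.try_accept("dram")
assert not pos.try_accept("dram")   # dram at capability; request held back
pos.complete("dram")
assert pos.try_accept("dram")       # capacity freed, acceptance resumes
```

Refusing acceptance up front keeps a slow slave from filling the intermediate element's tracking structures and starving requests bound for faster slaves.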
-
Patent number: 11593025
Abstract: A request node is provided comprising request circuitry to issue write requests to write data to storage circuitry. The write requests are issued to the storage circuitry via a coherency node. Status receiving circuitry receives a write status regarding write operations at the storage circuitry from the coherency node and throttle circuitry throttles a rate at which the write requests are issued to the storage circuitry in dependence on the write status. A coherency node is also provided, comprising access circuitry to receive a write request from a request node to write data to storage circuitry and to access the storage circuitry to write the data to the storage circuitry. Receive circuitry receives, from the storage circuitry, an incoming write status regarding write operations at the storage circuitry and transmit circuitry transmits an outgoing write status to the request node based on the incoming write status.
Type: Grant
Filed: January 15, 2020
Date of Patent: February 28, 2023
Assignee: Arm Limited
Inventors: Gurunath Ramagiri, Jamshed Jalal, Mark David Werkheiser, Tushar P Ringe, Klas Magnus Bruce, Ritukar Khanna
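The feedback loop can be sketched as window scaling: the request node shrinks its window of outstanding writes when the coherency node relays a congested write status. The congestion encoding and the linear back-off below are assumptions for illustration, not the patented scheme.

```python
# Sketch: write-rate throttling driven by a relayed write status.
class ThrottledRequester:
    def __init__(self, max_inflight=8):
        self.max_inflight = max_inflight
        self.limit = max_inflight        # current issue window
        self.inflight = 0

    def on_write_status(self, congestion):
        """congestion in [0.0, 1.0]; higher means throttle harder."""
        self.limit = max(1, int(self.max_inflight * (1.0 - congestion)))

    def can_issue(self):
        return self.inflight < self.limit

rn = ThrottledRequester()
assert rn.can_issue()
rn.on_write_status(0.9)                  # storage reports heavy backlog
assert rn.limit == 1                     # writes now trickle out one at a time
rn.on_write_status(0.0)
assert rn.limit == 8                     # full rate restored
```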
-
Patent number: 11573918
Abstract: Aspects of the present disclosure relate to an interconnect comprising interfaces to communicate with respective requester and receiver node devices, and home nodes. Each home node is configured to: receive requests from one or more requester nodes, each request comprising a target address corresponding to a target receiver node; and transmit each said request to the corresponding target receiver node. Mapping circuitry is configured to: associate each of said plurality of home nodes with a given home node cluster; perform a first hashing of the target address of a given request, to determine a target cluster; perform a second hashing of the target address, to determine a target home node within said target cluster; and direct the given request to the target home node.
Type: Grant
Filed: July 20, 2021
Date of Patent: February 7, 2023
Assignee: Arm Limited
Inventors: Mark David Werkheiser, Sai Kumar Marri, Lauren Elise Guckert, Gurunath Ramagiri, Jamshed Jalal
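The two-stage hashing can be sketched with placeholder hash functions (the bit selections, cluster count, and nodes-per-cluster below are invented; real designs would choose hashes that spread traffic evenly):

```python
# Sketch: first hash picks a home-node cluster, second hash picks a node
# within that cluster.
NUM_CLUSTERS = 4
NODES_PER_CLUSTER = 4

def target_home_node(addr):
    cluster = (addr >> 6) % NUM_CLUSTERS      # first hashing of the target address
    node = (addr >> 8) % NODES_PER_CLUSTER    # second hashing, within the cluster
    return cluster, node

c, n = target_home_node(0x1234C0)
assert 0 <= c < NUM_CLUSTERS
assert 0 <= n < NODES_PER_CLUSTER
# Same address always lands on the same home node, which coherency requires.
assert target_home_node(0x1234C0) == (c, n)
```

Splitting the decision into cluster-then-node keeps each hash small and lets the cluster topology change without redefining the per-node hash.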
-
Publication number: 20230029897
Abstract: Aspects of the present disclosure relate to an interconnect comprising interfaces to communicate with respective requester and receiver node devices, and home nodes. Each home node is configured to: receive requests from one or more requester nodes, each request comprising a target address corresponding to a target receiver node; and transmit each said request to the corresponding target receiver node. Mapping circuitry is configured to: associate each of said plurality of home nodes with a given home node cluster; perform a first hashing of the target address of a given request, to determine a target cluster; perform a second hashing of the target address, to determine a target home node within said target cluster; and direct the given request to the target home node.
Type: Application
Filed: July 20, 2021
Publication date: February 2, 2023
Inventors: Mark David WERKHEISER, Sai Kumar MARRI, Lauren Elise GUCKERT, Gurunath RAMAGIRI, Jamshed JALAL
-
Patent number: 11550720
Abstract: Entries in a cluster-to-caching agent map table of a data processing network identify one or more caching agents in a caching agent cluster. A snoop filter cache stores coherency information that includes coherency status information and a presence vector, where a bit position in the presence vector is associated with a caching agent cluster in the cluster-to-caching agent map table. In response to a data request, a presence vector in the snoop filter cache is accessed to identify a caching agent cluster and the map table is accessed to identify target caching agents for snoop messages. In order to reduce message traffic, snoop messages are sent only to the identified targets.
Type: Grant
Filed: November 24, 2020
Date of Patent: January 10, 2023
Assignee: Arm Limited
Inventors: Gurunath Ramagiri, Jamshed Jalal, Mark David Werkheiser, Tushar P Ringe, Mukesh Patel, Sakshi Verma
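The two lookups can be sketched directly: presence-vector bits select clusters, and the map table expands each set cluster into its member caching agents, so snoops go only where the line might actually be cached. The cluster layout and agent names are invented for the sketch.

```python
# Sketch: presence vector (one bit per cluster) plus a cluster-to-caching
# agent map table, yielding a targeted snoop list.
CLUSTER_MAP = {0: ["cpu0", "cpu1"], 1: ["cpu2", "cpu3"], 2: ["gpu0"]}

def snoop_targets(presence_vector):
    targets = []
    for bit, agents in CLUSTER_MAP.items():
        if presence_vector & (1 << bit):   # cluster may hold the line
            targets.extend(agents)         # expand to its caching agents
    return targets

# Line cached only in cluster 1: snoop just cpu2/cpu3, not the whole system.
assert snoop_targets(0b010) == ["cpu2", "cpu3"]
assert snoop_targets(0b101) == ["cpu0", "cpu1", "gpu0"]
```

Tracking per-cluster rather than per-agent keeps the presence vector narrow while still pruning most snoop traffic.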
-
Patent number: 11431649
Abstract: The present disclosure advantageously provides a method and system for allocating shared resources for an interconnect. A request is received at a home node from a request node over an interconnect, where the request represents a beginning of a transaction with a resource in communication with the home node, and the request has a traffic class defined by a user-configurable mapping based on one or more transaction attributes. The traffic class of the request is determined. A resource capability for the traffic class is determined based on user-configurable traffic class-based resource capability data. Whether a home node transaction table has an available entry for the request is determined based on the resource capability for the traffic class.
Type: Grant
Filed: March 26, 2021
Date of Patent: August 30, 2022
Assignee: Arm Limited
Inventors: Mukesh Patel, Jamshed Jalal, Gurunath Ramagiri, Tushar P Ringe, Mark David Werkheiser
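A compact sketch of class-based admission, assuming the traffic-class mapping is a simple attribute test and the resource capability is an entry quota per class (both are illustrative stand-ins for the user-configurable data the patent describes):

```python
# Sketch: home node transaction table with per-traffic-class entry quotas.
class HomeNodeTable:
    def __init__(self, per_class_quota):
        self.quota = dict(per_class_quota)   # traffic class -> max entries
        self.used = {c: 0 for c in per_class_quota}

    def classify(self, request):
        # Stand-in for the user-configurable attribute-to-class mapping.
        return "bulk" if request.get("is_dma") else "latency"

    def try_allocate(self, request):
        tc = self.classify(request)
        if self.used[tc] >= self.quota[tc]:
            return False                     # no entry available for this class
        self.used[tc] += 1
        return True

table = HomeNodeTable({"latency": 2, "bulk": 1})
assert table.try_allocate({"is_dma": True})
assert not table.try_allocate({"is_dma": True})   # bulk quota exhausted
assert table.try_allocate({})                     # latency class still has room
```

Reserving entries per class stops bulk traffic from monopolising the table and starving latency-sensitive transactions.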
-
Publication number: 20220164288
Abstract: Entries in a cluster-to-caching agent map table of a data processing network identify one or more caching agents in a caching agent cluster. A snoop filter cache stores coherency information that includes coherency status information and a presence vector, where a bit position in the presence vector is associated with a caching agent cluster in the cluster-to-caching agent map table. In response to a data request, a presence vector in the snoop filter cache is accessed to identify a caching agent cluster and the map table is accessed to identify target caching agents for snoop messages. In order to reduce message traffic, snoop messages are sent only to the identified targets.
Type: Application
Filed: November 24, 2020
Publication date: May 26, 2022
Applicant: Arm Limited
Inventors: Gurunath Ramagiri, Jamshed Jalal, Mark David Werkheiser, Tushar P. Ringe, Mukesh Patel, Sakshi Verma
-
Patent number: 11256646
Abstract: An apparatus and method are provided for handling ordered transactions. The apparatus has a plurality of completer elements to process transactions, a requester element to issue a sequence of ordered transactions, and an interconnect providing, for each completer element, a communication channel between that completer element and the requester element for transfer of signals between that completer element and the requester element in either direction. A given completer element that is processing a given transaction in the sequence is arranged to issue a response signal to the requester element over its associated communication channel that comprises an ordered channel indication to identify whether the associated communication channel has an ordered channel property. The ordered channel property guarantees that processing of transactions issued by the requester element over the associated communication channel in a given order will be completed by the given completer element in the same given order.
Type: Grant
Filed: November 15, 2019
Date of Patent: February 22, 2022
Assignee: Arm Limited
Inventors: Tushar P Ringe, Jamshed Jalal, Gurunath Ramagiri, Ashok Kumar Tummala, Mark David Werkheiser
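The benefit of the ordered channel indication can be shown with a toy cost model: on a channel that guarantees in-order completion, the requester can stream ordered transactions back-to-back, whereas on an unordered channel it must wait out each completion before releasing the next. The round-trip figure is invented purely for the arithmetic.

```python
# Toy model: issue cost for N ordered transactions, with and without the
# ordered channel property.
def issue_cost(n_transactions, channel_is_ordered, round_trip=100):
    if channel_is_ordered:
        return round_trip                  # pipelined: all overlap one round trip
    return n_transactions * round_trip     # serialised: wait for each completion

assert issue_cost(4, channel_is_ordered=True) == 100
assert issue_cost(4, channel_is_ordered=False) == 400
```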
-
Patent number: 11086802
Abstract: A technique is provided for routing access requests within an interconnect. An apparatus provides a plurality of requester elements for issuing access requests, and a slave element to be accessed in response to the access requests. An interconnect is used to couple the plurality of requester elements with the slave element, and provides an intermediate element that acts as a point of serialisation to order the access requests issued by the plurality of requester elements via the intermediate element. Communication channels are provided within the interconnect to support communication between each of the requester elements and the intermediate element, and between the intermediate element and the slave element.
Type: Grant
Filed: March 16, 2020
Date of Patent: August 10, 2021
Assignee: Arm Limited
Inventors: Jamshed Jalal, Tushar P. Ringe, Mark David Werkheiser, Gurunath Ramagiri
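The point-of-serialisation role can be sketched as a single funnel that stamps a global order onto requests arriving from any requester before they reach the slave. Names below are illustrative only.

```python
# Sketch: one intermediate element imposes a single global order on requests
# from multiple requester elements.
import itertools

class Serialiser:
    def __init__(self):
        self._order = itertools.count()
        self.log = []                    # (sequence, requester, request)

    def accept(self, requester, request):
        seq = next(self._order)          # the single point assigns the order
        self.log.append((seq, requester, request))
        return seq

s = Serialiser()
first = s.accept("rn0", "write A")
second = s.accept("rn1", "write A")      # later arrival gets the later slot
assert (first, second) == (0, 1)
assert [seq for seq, _, _ in s.log] == [0, 1]
```

Because every requester's path to the slave passes through this one element, conflicting accesses to the same address acquire an unambiguous order.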
-
Publication number: 20210216241
Abstract: A request node is provided comprising request circuitry to issue write requests to write data to storage circuitry. The write requests are issued to the storage circuitry via a coherency node. Status receiving circuitry receives a write status regarding write operations at the storage circuitry from the coherency node and throttle circuitry throttles a rate at which the write requests are issued to the storage circuitry in dependence on the write status. A coherency node is also provided, comprising access circuitry to receive a write request from a request node to write data to storage circuitry and to access the storage circuitry to write the data to the storage circuitry. Receive circuitry receives, from the storage circuitry, an incoming write status regarding write operations at the storage circuitry and transmit circuitry transmits an outgoing write status to the request node based on the incoming write status.
Type: Application
Filed: January 15, 2020
Publication date: July 15, 2021
Inventors: Gurunath RAMAGIRI, Jamshed JALAL, Mark David WERKHEISER, Tushar P. RINGE, Klas Magnus BRUCE, Ritukar KHANNA
-
Publication number: 20210149833
Abstract: An apparatus and method are provided for handling ordered transactions. The apparatus has a plurality of completer elements to process transactions, a requester element to issue a sequence of ordered transactions, and an interconnect providing, for each completer element, a communication channel between that completer element and the requester element for transfer of signals between that completer element and the requester element in either direction. A given completer element that is processing a given transaction in the sequence is arranged to issue a response signal to the requester element over its associated communication channel that comprises an ordered channel indication to identify whether the associated communication channel has an ordered channel property. The ordered channel property guarantees that processing of transactions issued by the requester element over the associated communication channel in a given order will be completed by the given completer element in the same given order.
Type: Application
Filed: November 15, 2019
Publication date: May 20, 2021
Inventors: Tushar P. RINGE, Jamshed JALAL, Gurunath RAMAGIRI, Ashok Kumar TUMMALA, Mark David WERKHEISER
-
Patent number: 10942865
Abstract: A method and apparatus are provided to enable snoop forwarding to occur together with memory protection. A data processing apparatus in, for instance, the form of a home node forwards a snoop forwarding request on behalf of a requester to a target, the snoop forwarding request being capable of indicating one or more access permissions of the target in relation to the data. A further data processing apparatus in the form of, for instance, a receiver node may receive the snoop forwarding request and based on its own permissions that are provided in the snoop forwarding request, together with the state of the data, either provide a response back to the requester or the home node. In a still further data processing apparatus in the form of, for instance, a Memory Protection Unit (MPU), a regular snoop forwarding request made to a target in relation to data can be forwarded to the target or demoted to a non-forwarding snoop request based on the permissions of the target in relation to the data at the MPU.
Type: Grant
Filed: June 13, 2019
Date of Patent: March 9, 2021
Assignee: Arm Limited
Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Iat Pui Chan, Lakshmi Joga Vishnu Vardhan Badukonda
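The demotion decision can be sketched as a permission check on the snoop type: a forwarding snoop is only sent as-is when the target is allowed to receive the data directly, and is otherwise demoted to its non-forwarding form. The CHI-style opcode strings and the permission table are illustrative assumptions, not the claimed protocol.

```python
# Sketch: an MPU-style check that demotes a forwarding snoop when the target
# lacks read permission for the line.
PERMS = {"rn1": {"read"}, "rn2": set()}       # hypothetical per-target permissions

def route_snoop(target, snoop_type="SnpSharedFwd"):
    if "read" in PERMS.get(target, set()):
        return snoop_type                     # target may be handed the data
    return snoop_type.replace("Fwd", "")      # demote: data comes via home node

assert route_snoop("rn1") == "SnpSharedFwd"   # permitted: direct forward allowed
assert route_snoop("rn2") == "SnpShared"      # demoted: non-forwarding snoop
```

Demotion preserves correctness (the data still flows, via the home node) while keeping protected data off direct cache-to-cache paths.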
-
Patent number: 10877904
Abstract: A system, apparatus and method for protecting coherent memory contents in a coherent data processing network by filtering data access requests and snoop responses based on the Read/Write (R/W) access permissions. Requests are augmented with access permissions in memory protection units and the access permissions are used to control memory access by home nodes of the network.
Type: Grant
Filed: March 22, 2019
Date of Patent: December 29, 2020
Assignee: Arm Limited
Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Ashok Kumar Tummala, Mark David Werkheiser
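The filtering step reduces to checking the R/W permission bits attached to each request against the operation it performs. The request encoding and operation names below are invented for the sketch; only the idea — the home node gating access on MPU-supplied permissions — comes from the abstract.

```python
# Sketch: home-node filtering of requests by attached R/W permissions.
def check_access(request):
    """Return True if the request's permissions cover the operation."""
    need = "w" if request["op"] in ("WriteUnique", "WriteBack") else "r"
    return need in request["perms"]

assert check_access({"op": "ReadShared", "perms": "r"})
assert check_access({"op": "WriteUnique", "perms": "rw"})
assert not check_access({"op": "WriteUnique", "perms": "r"})  # write denied
```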
-
Publication number: 20200394141
Abstract: A method and apparatus are provided to enable snoop forwarding to occur together with memory protection. A data processing apparatus in, for instance, the form of a home node forwards a snoop forwarding request on behalf of a requester to a target, the snoop forwarding request being capable of indicating one or more access permissions of the target in relation to the data. A further data processing apparatus in the form of, for instance, a receiver node may receive the snoop forwarding request and based on its own permissions that are provided in the snoop forwarding request, together with the state of the data, either provide a response back to the requester or the home node. In a still further data processing apparatus in the form of, for instance, a Memory Protection Unit (MPU), a regular snoop forwarding request made to a target in relation to data can be forwarded to the target or demoted to a non-forwarding snoop request based on the permissions of the target in relation to the data at the MPU.
Type: Application
Filed: June 13, 2019
Publication date: December 17, 2020
Applicant: Arm Limited
Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Iat Pui Chan, Lakshmi Joga Vishnu Vardhan Badukonda
-
Publication number: 20200301854
Abstract: A system, apparatus and method for protecting coherent memory contents in a coherent data processing network by filtering data access requests and snoop responses based on the Read/Write (R/W) access permissions. Requests are augmented with access permissions in memory protection units and the access permissions are used to control memory access by home nodes of the network.
Type: Application
Filed: March 22, 2019
Publication date: September 24, 2020
Applicant: Arm Limited
Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Ashok Kumar Tummala, Mark David Werkheiser
-
Patent number: 10698825
Abstract: In a system-on-chip there is a local interconnect to connect local devices on the chip to one another, a gateway to connect the chip to a remote chip of a plurality of chips in a cache-coherent multi-chip system via an inter-chip interconnect, and a cache-coherent device. The cache-coherent device has a cache-coherency look-up table having entries for shared cache data lines. When a data access request is received via the inter-chip interconnect and the local interconnect, a system-unique identifier for a request source of the data access request is generated in dependence on an inter-chip request source identifier used on the inter-chip interconnect and an identifier indicative of the remote chip. The bit-set used to express the system-unique identifier is larger than the bit-set used to express the inter-chip request source identifier.
Type: Grant
Filed: March 12, 2019
Date of Patent: June 30, 2020
Assignee: Arm Limited
Inventors: Gurunath Ramagiri, Ashok Kumar Tummala, Mark David Werkheiser, Jamshed Jalal, Premkishore Shivakumar, Paul Gilbert Meyer
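The identifier widening can be sketched as bit concatenation: prefixing the inter-chip request source ID with the source chip's ID yields a system-unique ID, and the combined field is necessarily wider than the inter-chip field alone. The 8-bit field width is an invented example.

```python
# Sketch: forming a system-unique request-source ID from a chip ID and an
# inter-chip source ID.
INTER_CHIP_ID_BITS = 8          # hypothetical width of the inter-chip field

def system_unique_id(chip_id, inter_chip_src_id):
    assert inter_chip_src_id < (1 << INTER_CHIP_ID_BITS)
    return (chip_id << INTER_CHIP_ID_BITS) | inter_chip_src_id

# The same on-chip source ID from two different chips stays distinguishable,
# so the look-up table never confuses their outstanding requests.
assert system_unique_id(1, 0x2A) != system_unique_id(2, 0x2A)
assert system_unique_id(1, 0x2A) == 0x12A
```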