Patents by Inventor Phanindra Kumar Mannava

Phanindra Kumar Mannava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230236992
    Abstract: In response to determining circuitry determining that a portion of data to be sent to a recipient over an interconnect has a predetermined value, data sending circuitry performs data elision to: omit sending at least one data FLIT corresponding to the portion of data having the predetermined value; and send a data-elision-specifying FLIT specifying data-elision information indicating to the recipient that sending of the at least one data FLIT has been omitted and that the recipient can proceed assuming the portion of data has the predetermined value. The data-elision-specifying FLIT is a FLIT other than a write request FLIT for initiating a memory write transaction sequence. This helps to conserve data FLIT bandwidth for other data not having the predetermined value.
    Type: Application
    Filed: January 21, 2022
    Publication date: July 27, 2023
    Inventors: Klas Magnus BRUCE, Jamshed JALAL, Håkan Lars-Göran PERSSON, Phanindra Kumar MANNAVA
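As a rough illustration of the data-elision scheme described in publication 20230236992, the Python sketch below models a sender that omits all-zero data FLITs and instead sends a separate FLIT carrying elision information, and a receiver that reconstructs the omitted portions. The chunk size, flit labels and field names are illustrative assumptions, not the encoding claimed in the application.

```python
# Minimal sketch of data elision over an interconnect link (hypothetical encoding).
# A packet's data is split into chunks; all-zero chunks are not sent as data flits.
# Instead, a single "data-elision-specifying" flit carries a bitmask telling the
# receiver which chunks were omitted and may be assumed to be zero.

CHUNK = 16  # bytes per data-flit payload (illustrative)

def send(data: bytes):
    """Yield (kind, payload) flits, eliding all-zero chunks."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    mask = sum(1 << i for i, c in enumerate(chunks) if c == bytes(CHUNK))
    yield ("elision_info", mask)          # not a write-request flit: elision info only
    for i, c in enumerate(chunks):
        if not mask & (1 << i):           # only non-zero chunks consume data-flit bandwidth
            yield ("data", (i, c))

def receive(flits, total_len: int) -> bytes:
    """Reassemble the original data, substituting zeros for elided chunks."""
    chunks = [bytes(CHUNK)] * (total_len // CHUNK)
    for kind, payload in flits:
        if kind == "data":
            i, c = payload
            chunks[i] = c
    return b"".join(chunks)

if __name__ == "__main__":
    data = bytes(16) + b"A" * 16 + bytes(16) + b"B" * 16   # two zero chunks, two non-zero
    flits = list(send(data))
    assert sum(k == "data" for k, _ in flits) == 2          # only two data flits were sent
    assert receive(flits, len(data)) == data
```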
  • Patent number: 11483260
Abstract: An improved protocol for data transfer between a request node and a home node of a data processing network that includes a number of devices coupled via an interconnect fabric is provided that minimizes the number of response messages transported through the interconnect fabric. When congestion is detected in the interconnect fabric, a home node sends a combined response to a write request from a request node. The response is delayed until a data buffer is available at the home node and the home node has completed an associated coherence action. When the request node receives a combined response, the data to be written and the acknowledgment are coalesced in the data message.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: October 25, 2022
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Tushar P Ringe, Phanindra Kumar Mannava, Dimitrios Kaseridis
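A toy message-count comparison may help picture the flow in patent 11483260: one combined response from the home node replaces separate buffer-availability and completion responses, and the request node coalesces its write data with the acknowledgment. The message labels below are illustrative (loosely echoing CHI-style names), not the exact messages defined by the patent.

```python
# Toy comparison of message counts for a write handshake (hypothetical labels).

def separate_flow():
    return [
        ("RN->HN", "WriteRequest"),
        ("HN->RN", "DBIDResp"),        # data buffer available at the home node
        ("RN->HN", "WriteData"),
        ("HN->RN", "Comp"),            # coherence action complete
        ("RN->HN", "CompAck"),
    ]

def combined_flow():
    return [
        ("RN->HN", "WriteRequest"),
        ("HN->RN", "CombinedResp"),    # delayed until buffer free and coherence done
        ("RN->HN", "WriteData+Ack"),   # data and acknowledgment coalesced
    ]

if __name__ == "__main__":
    print("separate:", len(separate_flow()), "messages")
    print("combined:", len(combined_flow()), "messages")
```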
  • Patent number: 11314648
    Abstract: Data processing apparatus comprises a data access requesting node; data access circuitry to receive a data access request from the data access requesting node and to route the data access request for fulfilment by one or more data storage nodes selected from a group of two or more data storage nodes; and indication circuitry to provide a source indication to the data access requesting node, to indicate an attribute of the one or more data storage nodes which fulfilled the data access request; the data access requesting node being configured to vary its operation in response to the source indication.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: April 26, 2022
    Assignee: Arm Limited
Inventors: Michael Filippo, Jamshed Jalal, Klas Magnus Bruce, Alex James Waugh, Geoffray Lacourba, Paul Gilbert Meyer, Bruce James Mathewson, Phanindra Kumar Mannava
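The sketch below illustrates, under assumed names, the idea in patent 11314648 of a source indication: the response reports which kind of storage node fulfilled the request, and the requesting node varies its own operation accordingly (here, a prefetch distance). The source labels and the chosen policy are hypothetical examples, not taken from the patent.

```python
# Hypothetical sketch: a response carries a "source indication" describing the node
# that fulfilled the data access, and the requester adapts its behaviour to it.

from dataclasses import dataclass

@dataclass
class Response:
    data: int
    source: str   # e.g. "peer-cache", "system-cache", "dram" (illustrative labels)

PREFETCH_DISTANCE = {"peer-cache": 1, "system-cache": 2, "dram": 8}

class Requester:
    def __init__(self):
        self.prefetch_distance = 1

    def on_response(self, resp: Response) -> int:
        # Vary operation in response to the source indication: fetch further
        # ahead when fills are coming from a slower storage node.
        self.prefetch_distance = PREFETCH_DISTANCE.get(resp.source, 4)
        return resp.data

if __name__ == "__main__":
    rn = Requester()
    rn.on_response(Response(data=42, source="dram"))
    print("prefetch distance after DRAM fill:", rn.prefetch_distance)
```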
  • Patent number: 11314675
    Abstract: A data processing system comprises a master node to initiate data transmissions; one or more slave nodes to receive the data transmissions; and a home node to control coherency amongst data stored by the data processing system; in which at least one data transmission from the master node to one of the one or more slave nodes bypasses the home node.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: April 26, 2022
    Assignee: Arm Limited
    Inventors: Guanghui Geng, Andrew David Tune, Daniel Adam Sara, Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal
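A minimal model of the bypass described in patent 11314675: the home node still handles the coherency control messages, but the data payload travels directly from the master node to the slave node without passing through the home node. The class and method names below are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical node classes): coherence permission is obtained
# from the home node, but the data itself is sent directly to the slave node.

class HomeNode:
    def grant_write(self, addr):          # coherence bookkeeping only; no data passes here
        return {"addr": addr, "ok": True}

class SlaveNode:
    def __init__(self):
        self.mem = {}
    def receive_data(self, addr, data):   # data arrives directly from the master
        self.mem[addr] = data

class MasterNode:
    def __init__(self, home, slave):
        self.home, self.slave = home, slave
    def write(self, addr, data):
        grant = self.home.grant_write(addr)      # control message via the home node
        if grant["ok"]:
            self.slave.receive_data(addr, data)  # data transmission bypasses the home node

if __name__ == "__main__":
    slave = SlaveNode()
    MasterNode(HomeNode(), slave).write(0x80, b"payload")
    print(slave.mem)
```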
  • Patent number: 11269773
    Abstract: Circuitry comprises a set of two or more data handling nodes each having respective storage circuitry to hold data; and a home node to serialise data access operations and to control coherency amongst data held by the one or more data handling nodes so that data written to a memory address is consistent with data read from that memory address in response to a subsequent access request; in which: a requesting node of the set of data handling nodes is configured to communicate a request to the home node for exclusive access to a given instance of data at a given memory address; and the home node is configured, in response to the request, to communicate information to other data handling nodes of the set of data handling nodes to control handling, by those other data handling nodes, of any further instances of the data at the given memory address which are held by those other data handling nodes.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: March 8, 2022
    Assignee: Arm Limited
    Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Jamshed Jalal, Klas Magnus Bruce, Andrew John Turner
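The exclusive-access flow of patent 11269773 can be pictured with a small simulation: the home node serialises the request and tells every other node holding a copy of the line how to handle it (modelled here simply as invalidation). The classes and the "invalidate" policy are illustrative assumptions, not the patent's claimed handling.

```python
# Hypothetical sketch of the exclusive-access flow controlled by a home node.

class Node:
    def __init__(self, name):
        self.name, self.cache = name, {}
    def handle_invalidate(self, addr):
        self.cache.pop(addr, None)

class HomeNode:
    def __init__(self, nodes):
        self.nodes = nodes
    def request_exclusive(self, requester, addr):
        # Inform the other data handling nodes how to handle their copies.
        for node in self.nodes:
            if node is not requester and addr in node.cache:
                node.handle_invalidate(addr)
        requester.cache[addr] = "exclusive"

if __name__ == "__main__":
    a, b = Node("A"), Node("B")
    b.cache[0x40] = "shared"
    HomeNode([a, b]).request_exclusive(a, 0x40)
    print(a.cache, b.cache)   # A now holds the exclusive copy; B's copy is gone
```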
  • Patent number: 11256623
Abstract: Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item be retrieved and brought into cache. Since the target device can therefore decide whether to respond to the trigger or not, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: February 22, 2022
    Assignee: ARM LIMITED
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce, Michael Filippo, Paul Gilbert Meyer, Alex James Waugh, Geoffray Matthieu Lacourba
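A small sketch of the pre-population handshake in patent 11256623: the hub forwards only a trigger naming the data, and the target decides for itself whether to fetch it, so it never receives unsolicited cached data. Class names and the target's accept/decline policy are hypothetical.

```python
# Hypothetical sketch: a cache pre-population request becomes a trigger at the hub;
# the target chooses whether to act on it and, if so, requests the data itself.

class Target:
    def __init__(self, accept=True):
        self.accept, self.cache = accept, {}
    def on_prepopulate_trigger(self, hub, addr):
        if self.accept:                        # the target's own policy decides
            self.cache[addr] = hub.read(addr)  # it pulls the data; nothing was pushed

class Hub:
    def __init__(self, memory, target):
        self.memory, self.target = memory, target
    def read(self, addr):
        return self.memory[addr]
    def on_prepopulate_request(self, addr):
        self.target.on_prepopulate_trigger(self, addr)  # trigger only, no data attached

if __name__ == "__main__":
    tgt = Target(accept=True)
    Hub({0x100: "hot data"}, tgt).on_prepopulate_request(0x100)
    print(tgt.cache)
```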
  • Patent number: 11188377
    Abstract: Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue an individual all-zero-data write transaction specifying a data storage location to the interconnect, which conveys the individual all-zero-data write transaction to a target device which writes all-zero-data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: November 30, 2021
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Mark David Werkheiser, Phanindra Kumar Mannava, Bruce James Mathewson
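The point of patent 11188377 is that an all-zero-data write carries no payload on the write data channel. The sketch below contrasts a normal write with a zero-write at a target device; the channel accounting and method names are illustrative assumptions.

```python
# Hypothetical channel model: a normal write moves its payload over the write data
# channel, while an all-zero-data write conveys no write data at all; the target
# simply writes zeros at the location, so the data channel is not loaded.

class Target:
    def __init__(self, line_size=64):
        self.line_size, self.mem, self.data_beats = line_size, {}, 0
    def write(self, addr, data):
        self.data_beats += 1               # payload crossed the write data channel
        self.mem[addr] = data
    def write_zero(self, addr):            # request only: no write data conveyed
        self.mem[addr] = bytes(self.line_size)

if __name__ == "__main__":
    t = Target()
    t.write(0x0, b"\x01" * 64)
    t.write_zero(0x40)
    print("data-channel beats used:", t.data_beats)   # only the normal write used one
```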
  • Patent number: 11159636
    Abstract: A data processing apparatus is provided, which includes receiving circuitry to receive a snoop request in respect of requested data on behalf of a requesting node. The snoop request includes an indication as to whether forwarding is to occur. Transmitting circuitry transmits a response to the snoop request and cache circuitry caches at least one data value. When forwarding is to occur and the at least one data value includes the requested data, the response includes the requested data and the transmitting circuitry transmits the response to the requesting node.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: October 26, 2021
    Assignee: ARM LIMITED
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce
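For patent 11159636, the sketch below models a snooped cache that, when the snoop request indicates forwarding is to occur and it holds the requested data, sends its response with the data directly to the requesting node. The message routing shown (and what the home node receives) is a simplified assumption.

```python
# Hypothetical sketch of snoop forwarding: the snoop carries a forwarding indication;
# if set and the line is cached, the data goes straight to the requesting node.

class SnoopedNode:
    def __init__(self, cache):
        self.cache = cache
    def on_snoop(self, addr, forward, requester, home):
        if forward and addr in self.cache:
            requester.receive(addr, self.cache[addr])   # response with data to requester
            home.receive_ack(addr)                      # home node gets an acknowledgement
        else:
            home.receive_ack(addr, data=self.cache.get(addr))

class Endpoint:
    def __init__(self):
        self.got = {}
    def receive(self, addr, data):
        self.got[addr] = data
    def receive_ack(self, addr, data=None):
        if data is not None:
            self.got[addr] = data

if __name__ == "__main__":
    rn, hn = Endpoint(), Endpoint()
    SnoopedNode({0x10: b"cached line"}).on_snoop(0x10, forward=True, requester=rn, home=hn)
    print(rn.got)   # the requesting node received the line directly
```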
  • Patent number: 11055250
Abstract: An apparatus is provided, to be used with an interconnect comprising a home node. The apparatus includes general-purpose storage circuitry and specialised storage circuitry. Transfer circuitry performs a non-forwardable transfer of a data item from the general-purpose storage circuitry to the specialised storage circuitry. Transmit circuitry transmits an offer to the home node, at a time of the non-forwardable transfer, to transfer the data item to the home node. The apparatus is inhibited from forwarding the data item from the specialised storage circuitry to the home node.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: July 6, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Klas Magnus Bruce, Damien Guillaume Pierre Payet, Jamshed Jalal, Alex James Waugh
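A toy model of the offer mechanism in patent 11055250: because the item cannot be forwarded once it sits in the specialised storage, the apparatus offers it to the home node at the moment of the transfer. The class names and the home node's accept policy below are assumptions for illustration.

```python
# Hypothetical sketch: an offer is made to the home node at the time of a
# non-forwardable transfer from general-purpose to specialised storage.

class HomeNode:
    def __init__(self, wants):
        self.wants, self.copies = wants, {}
    def offer(self, addr, data):
        if addr in self.wants:             # the home node may accept or decline
            self.copies[addr] = data

class Apparatus:
    def __init__(self, home):
        self.home, self.general, self.specialised = home, {}, {}
    def non_forwardable_transfer(self, addr):
        data = self.general.pop(addr)
        self.home.offer(addr, data)        # offer made at the time of the transfer
        self.specialised[addr] = data      # from here on, forwarding to home is inhibited

if __name__ == "__main__":
    hn = HomeNode(wants={0x20})
    ap = Apparatus(hn)
    ap.general[0x20] = b"item"
    ap.non_forwardable_transfer(0x20)
    print(hn.copies, ap.specialised)
```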
  • Publication number: 20210126877
Abstract: An improved protocol for data transfer between a request node and a home node of a data processing network that includes a number of devices coupled via an interconnect fabric is provided that minimizes the number of response messages transported through the interconnect fabric. When congestion is detected in the interconnect fabric, a home node sends a combined response to a write request from a request node. The response is delayed until a data buffer is available at the home node and the home node has completed an associated coherence action. When the request node receives a combined response, the data to be written and the acknowledgment are coalesced in the data message.
    Type: Application
    Filed: May 2, 2019
    Publication date: April 29, 2021
    Applicant: Arm Limited
    Inventors: Jamshed Jalal, Tushar P Ringe, Phanindra Kumar Mannava, Dimitrios Kaseridis
  • Publication number: 20210103524
    Abstract: Circuitry comprises a set of two or more data handling nodes each having respective storage circuitry to hold data; and a home node to serialise data access operations and to control coherency amongst data held by the one or more data handling nodes so that data written to a memory address is consistent with data read from that memory address in response to a subsequent access request; in which: a requesting node of the set of data handling nodes is configured to communicate a request to the home node for exclusive access to a given instance of data at a given memory address; and the home node is configured, in response to the request, to communicate information to other data handling nodes of the set of data handling nodes to control handling, by those other data handling nodes, of any further instances of the data at the given memory address which are held by those other data handling nodes.
    Type: Application
    Filed: October 8, 2019
    Publication date: April 8, 2021
    Inventors: Bruce James MATHEWSON, Phanindra Kumar MANNAVA, Jamshed JALAL, Klas Magnus BRUCE, Andrew John TURNER
  • Publication number: 20210103525
    Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point in order to make that block of data visible to one or more other requester elements.
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Phanindra Kumar MANNAVA, Bruce James MATHEWSON, Jamshed JALAL
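The range-based cache maintenance operation of publication 20210103525 (and the corresponding patent 10970225 below) can be sketched as pushing every line that falls in the requested address range down through the cache levels to a chosen visibility point. The three-level hierarchy, line size and function name are illustrative assumptions.

```python
# Hypothetical sketch of a range cache-maintenance operation: dirty lines covering
# the specified memory address range are pushed down to a determined visibility
# point so that the block becomes visible to other requester elements.

LINE = 64

class Level:
    def __init__(self, name):
        self.name, self.lines = name, {}

def cmo_push_range(levels, start, length, visibility_level):
    """Push lines in [start, start+length) down to levels[visibility_level]."""
    for lvl in range(visibility_level):
        for addr in list(levels[lvl].lines):
            if start <= addr < start + length:
                levels[visibility_level].lines[addr] = levels[lvl].lines.pop(addr)

if __name__ == "__main__":
    l1, l2, l3 = Level("L1"), Level("L2"), Level("L3 / visibility point")
    l1.lines[0x1000] = b"dirty"
    l2.lines[0x1040] = b"dirty"
    cmo_push_range([l1, l2, l3], start=0x1000, length=2 * LINE, visibility_level=2)
    print(l1.lines, l2.lines, sorted(hex(a) for a in l3.lines))
```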
  • Publication number: 20210103543
Abstract: An apparatus is provided, to be used with an interconnect comprising a home node. The apparatus includes general-purpose storage circuitry and specialised storage circuitry. Transfer circuitry performs a non-forwardable transfer of a data item from the general-purpose storage circuitry to the specialised storage circuitry. Transmit circuitry transmits an offer to the home node, at a time of the non-forwardable transfer, to transfer the data item to the home node. The apparatus is inhibited from forwarding the data item from the specialised storage circuitry to the home node.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Phanindra Kumar MANNAVA, Bruce James MATHEWSON, Klas Magnus BRUCE, Damien Guillaume Pierre PAYET, Jamshed JALAL, Alex James WAUGH
  • Publication number: 20210103460
    Abstract: Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue an individual all-zero-data write transaction specifying a data storage location to the interconnect, which conveys the individual all-zero-data write transaction to a target device which writes all-zero-data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Jamshed JALAL, Mark David WERKHEISER, Phanindra Kumar MANNAVA, Bruce James MATHEWSON
  • Publication number: 20210103493
    Abstract: A requester issues a request specifying a target address indicating an addressed location in a memory system. A completer responds to the request. Tag error checking circuitry performs a tag error checking operation when the request issued by the requester is a tag-error-checking request specifying an address tag. The tag error checking operation comprises determining whether the address tag matches an allocation tag stored in the memory system associated with a block of one or more addresses comprising the target address specified by the tag-error-checking request. The requester and the completer communicate via a memory interface having at least one data signal path to exchange read data or write data between the requester and the completer; and at least one tag signal path, provided in parallel with the at least one data signal path, to exchange address tags or allocation tags between the requester and the completer.
    Type: Application
    Filed: October 7, 2019
    Publication date: April 8, 2021
    Inventors: Bruce James MATHEWSON, Phanindra Kumar MANNAVA, Michael Andrew CAMPBELL, Alexander Alfred HORNUNG, Alex James WAUGH, Klas Magnus BRUCE, Richard Roy GRISENTHWAITE
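The tag error checking operation described in publication 20210103493 (and patent 10949292 below) boils down to comparing an address tag carried by the request against an allocation tag stored for the block containing the target address. The sketch below shows that comparison; the granule size, tag values and class names are illustrative, and the parallel tag signal path is not modelled.

```python
# Hypothetical sketch of the tag check: the memory system keeps an allocation tag
# per block of addresses, and a tag-error-checking request supplies an address tag
# that must match the stored allocation tag for the target address.

GRANULE = 16   # addresses covered by one allocation tag (illustrative)

class TaggedMemory:
    def __init__(self):
        self.data, self.alloc_tags = {}, {}
    def set_allocation_tag(self, addr, tag):
        self.alloc_tags[addr // GRANULE] = tag
    def checked_read(self, addr, address_tag):
        stored = self.alloc_tags.get(addr // GRANULE)
        if stored is not None and stored != address_tag:
            raise ValueError(f"tag mismatch at {addr:#x}: {address_tag} != {stored}")
        return self.data.get(addr, 0)

if __name__ == "__main__":
    mem = TaggedMemory()
    mem.set_allocation_tag(0x1000, tag=0x3)
    print(mem.checked_read(0x1000, address_tag=0x3))   # tags match: read succeeds
    try:
        mem.checked_read(0x1008, address_tag=0x5)      # same block, wrong address tag
    except ValueError as e:
        print("tag check failed:", e)
```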
  • Patent number: 10970225
    Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point in order to make that block of data visible to one or more other requester elements.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: April 6, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal
  • Patent number: 10963409
Abstract: An interconnect circuit, and method of operation of such an interconnect circuit, are provided. The interconnect circuitry has a first interface for coupling to a master device and a second interface for coupling to a slave device. Transactions are performed between the master device and the slave device, where each transaction comprises one or more transfers, and each transfer comprises a request and a response. A first connection path between the first interface and the second interface is provided that comprises a first plurality of pipeline stages. The first connection path forms a default path for propagation of the requests and responses of the transfers. A second connection path is also provided between the first interface and the second interface that comprises a second plurality of pipeline stages, where the second plurality is less than the first plurality. Path selection circuitry is then used to determine presence of a fast path condition.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: March 30, 2021
    Assignee: Arm Limited
    Inventors: Rowan Nigel Naylor, Phanindra Kumar Mannava, Bruce James Mathewson
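A very small sketch of the path selection in patent 10963409: the default path has more pipeline stages than the second path, and transfers take the shorter path only when a fast-path condition is detected. The stage counts and the condition used here are placeholders, not values from the patent.

```python
# Hypothetical sketch: choose between a default connection path and a shorter
# "fast" path depending on whether a fast-path condition is present.

DEFAULT_STAGES, FAST_STAGES = 5, 2   # illustrative pipeline depths

def route(transfer: str, fast_path_condition: bool):
    """Pick a connection path for a request/response and report its latency in stages."""
    path, stages = (("fast", FAST_STAGES) if fast_path_condition
                    else ("default", DEFAULT_STAGES))
    return transfer, path, stages

if __name__ == "__main__":
    print(route("read request", fast_path_condition=True))    # takes the shorter path
    print(route("read response", fast_path_condition=False))  # falls back to the default path
```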
  • Patent number: 10949292
    Abstract: A requester issues a request specifying a target address indicating an addressed location in a memory system. A completer responds to the request. Tag error checking circuitry performs a tag error checking operation when the request issued by the requester is a tag-error-checking request specifying an address tag. The tag error checking operation comprises determining whether the address tag matches an allocation tag stored in the memory system associated with a block of one or more addresses comprising the target address specified by the tag-error-checking request. The requester and the completer communicate via a memory interface having at least one data signal path to exchange read data or write data between the requester and the completer; and at least one tag signal path, provided in parallel with the at least one data signal path, to exchange address tags or allocation tags between the requester and the completer.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: March 16, 2021
    Assignee: Arm Limited
    Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Michael Andrew Campbell, Alexander Alfred Hornung, Alex James Waugh, Klas Magnus Bruce, Richard Roy Grisenthwaite
  • Patent number: 10917198
    Abstract: In a data processing network comprising one or more Request Nodes and a Home Node coupled via a coherent interconnect, a Request Node requests data from the Home Node. The requested data is sent, via the interconnect, to the Request Node in a plurality of data beats, where a first data beat of the plurality of data beats is received at a first time and a last data beat is received at a second time. Responsive to receiving the first data beat, the Request Node sends an acknowledgement message to the Home Node. Upon receipt of the acknowledgement message, the Home Node frees resources allocated to the read transaction. In addition, the Home Node is configured to allow snoop requests for the data to the Request Node to be sent to the Request Node before all beats of the requested data have been received by the Request Node.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: February 9, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Tushar P. Ringe
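The timing idea in patent 10917198 is that the request node acknowledges on the first data beat rather than the last, so the home node can free its transaction resources earlier. The sketch below models that ordering; the class names and four-beat transfer are assumptions, and the early-snoop behaviour mentioned in the abstract is not modelled.

```python
# Hypothetical timeline sketch: the requested line arrives in several data beats;
# the request node acknowledges on the first beat, letting the home node free the
# resources allocated to the read before the remaining beats have landed.

class HomeNode:
    def __init__(self):
        self.tracker_busy, self.freed_at_beat = False, None
    def start_read(self):
        self.tracker_busy = True
    def on_ack(self, beat_index):
        self.tracker_busy = False
        self.freed_at_beat = beat_index

class RequestNode:
    def __init__(self, home):
        self.home, self.beats = home, []
    def receive_beat(self, index, payload):
        if not self.beats:                 # first data beat: acknowledge immediately
            self.home.on_ack(index)
        self.beats.append(payload)

if __name__ == "__main__":
    hn = HomeNode(); hn.start_read()
    rn = RequestNode(hn)
    for i in range(4):                     # four data beats for one cache line
        rn.receive_beat(i, f"beat{i}")
    print("home tracker freed after beat", hn.freed_at_beat, "of", len(rn.beats) - 1)
```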
  • Patent number: 10795820
Abstract: Apparatus and a corresponding method of operating the apparatus, in a coherent interconnect system comprising a requesting master device and a data-storing slave device, are provided. The apparatus maintains records of coherency protocol transactions received from the requesting master device whilst completion of the coherency protocol transactions is pending, and is responsive to reception of a read transaction from the requesting master device for a data item stored in the data-storing slave device to issue a direct memory transfer request to the data-storing slave device. A read acknowledgement trigger is added to the direct memory transfer request, and in response to reception of a read acknowledgement signal from the data-storing slave device, a record created by reception of the read transaction is updated to reflect completion of the direct memory transfer request.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: October 6, 2020
    Assignee: ARM Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Tushar P Ringe
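Finally, the record-keeping in patent 10795820 can be sketched as follows: the apparatus creates a record when the read transaction arrives, issues a direct memory transfer request with a read-acknowledgement trigger, and only updates the record when the slave's acknowledgement comes back. The class names and the synchronous callback below are simplifying assumptions.

```python
# Hypothetical sketch: pending-transaction records are updated on receipt of the
# read-acknowledgement signal tied to a direct memory transfer request.

class Slave:
    def __init__(self, mem):
        self.mem = mem
    def direct_transfer(self, addr, master, ack):
        master.receive(addr, self.mem[addr])   # data goes straight to the master
        ack(addr)                              # read-acknowledgement trigger fires

class Master:
    def __init__(self):
        self.data = {}
    def receive(self, addr, value):
        self.data[addr] = value

class Apparatus:
    def __init__(self, slave):
        self.slave, self.records = slave, {}
    def on_read(self, master, addr):
        self.records[addr] = "pending"         # record created when the read is received
        self.slave.direct_transfer(addr, master, self.on_read_ack)
    def on_read_ack(self, addr):
        self.records[addr] = "complete"        # record updated on the acknowledgement

if __name__ == "__main__":
    ap = Apparatus(Slave({0x200: b"value"}))
    m = Master()
    ap.on_read(m, 0x200)
    print(m.data, ap.records)
```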