Patents by Inventor Jamshed Jalal

Jamshed Jalal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11074206
    Abstract: The present disclosure advantageously provides a method and system for transferring data over at least one interconnect. A request node, coupled to an interconnect, receives a first write burst from a first device over a first connection, divides the first write burst into an ordered sequence of smaller write requests based on the size of the first write burst, and sends the ordered sequence of write requests to a home node coupled to the interconnect. The home node generates an ordered sequence of write transactions based on the ordered sequence of write requests, and sends the ordered sequence of write transactions to a write combiner coupled to the home node. The write combiner combines the ordered sequence of write transactions into a second write burst that is the same size as the first write burst, and sends the second write burst to a second device over a second connection.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: July 27, 2021
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Tushar P Ringe, Kishore Kumar Jagadeesha, Ashok Kumar Tummala, Rishabh Jain, Devi Sravanthi Yalamarthy
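    Illustrative sketch: a minimal Python model of the split-and-recombine flow described in the abstract above; the 64-byte chunk size and the WriteRequest class are assumptions for illustration, not the patented micro-architecture.
      from dataclasses import dataclass

      CHUNK = 64  # assumed interconnect transfer granule

      @dataclass
      class WriteRequest:
          seq: int       # position in the ordered sequence
          payload: bytes

      def split_burst(burst: bytes) -> list:
          # Request node: divide the burst, by size, into ordered smaller requests.
          return [WriteRequest(i, burst[o:o + CHUNK])
                  for i, o in enumerate(range(0, len(burst), CHUNK))]

      def combine(requests: list) -> bytes:
          # Write combiner: rebuild a burst of the same size as the original.
          return b"".join(r.payload for r in sorted(requests, key=lambda r: r.seq))

      original = bytes(512)                          # a 512-byte write burst
      assert combine(split_burst(original)) == original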
  • Publication number: 20210216241
    Abstract: A request node is provided comprising request circuitry to issue write requests to write data to storage circuitry. The write requests are issued to the storage circuitry via a coherency node. Status receiving circuitry receives a write status regarding write operations at the storage circuitry from the coherency node and throttle circuitry throttles a rate at which the write requests are issued to the storage circuitry in dependence on the write status. A coherency node is also provided, comprising access circuitry to receive a write request from a request node to write data to storage circuitry and to access the storage circuitry to write the data to the storage circuitry. Receive circuitry receives, from the storage circuitry, an incoming write status regarding write operations at the storage circuitry and transmit circuitry transmits an outgoing write status to the request node based on the incoming write status.
    Type: Application
    Filed: January 15, 2020
    Publication date: July 15, 2021
    Inventors: Gurunath RAMAGIRI, Jamshed JALAL, Mark David WERKHEISER, Tushar P. RINGE, Klas Magnus BRUCE, Ritukar KHANNA
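    Illustrative sketch: a minimal Python model of throttling the write-issue rate in dependence on a write status; the occupancy-style status encoding and the ThrottledRequester class are illustrative assumptions.
      class ThrottledRequester:
          def __init__(self):
              self.delay = 0.0                    # gap inserted between write requests

          def on_write_status(self, outstanding: int, capacity: int) -> None:
              # Write status received (via the coherency node) about the storage circuitry.
              occupancy = outstanding / capacity
              if occupancy > 0.75:                # storage is falling behind: back off
                  self.delay = max(self.delay * 2, 1e-4)
              elif occupancy < 0.25:              # storage has headroom: speed up
                  self.delay /= 2

      r = ThrottledRequester()
      r.on_write_status(outstanding=9, capacity=10)
      assert r.delay > 0                          # issue rate has been throttled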
  • Patent number: 11055250
    Abstract: An apparatus is provided, to be used with an interconnect comprising a home node. The apparatus includes general-purpose storage circuitry and specialised storage circuitry. Transfer circuitry performs a non-forwardable transfer of a data item from the general-purpose storage circuitry to the specialised storage circuitry. Transmit circuitry transmits an offer to the home node, at a time of the non-forwardable transfer, to transfer the data item to the home node. The apparatus is inhibited from forwarding the data item from the specialised storage circuitry to the home node.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: July 6, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Klas Magnus Bruce, Damien Guillaume Pierre Payet, Jamshed Jalal, Alex James Waugh
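    Illustrative sketch: a minimal Python model of the non-forwardable transfer and the accompanying offer to the home node; the dictionary-based storage and the receive_offer call are illustrative assumptions.
      class HomeNode:
          def __init__(self):
              self.offered = {}

          def receive_offer(self, addr, item):
              self.offered[addr] = item             # home node may accept the offered item

      class Apparatus:
          def __init__(self, home):
              self.general = {}                     # general-purpose storage circuitry
              self.special = {}                     # specialised storage circuitry
              self.home = home

          def non_forwardable_transfer(self, addr):
              # Move the item to specialised storage and, at the same time, offer it
              # to the home node (there is no path here for forwarding it later).
              item = self.general.pop(addr)
              self.home.receive_offer(addr, item)   # offer sent at transfer time
              self.special[addr] = item

      home = HomeNode()
      a = Apparatus(home)
      a.general[0x80] = b"data"
      a.non_forwardable_transfer(0x80)
      assert 0x80 in a.special and 0x80 in home.offered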
  • Publication number: 20210149833
    Abstract: An apparatus and method are provided for handling ordered transactions. The apparatus has a plurality of completer elements to process transactions, a requester element to issue a sequence of ordered transactions, and an interconnect providing, for each completer element, a communication channel between that completer element and the requester element for transfer of signals between that completer element and the requester element in either direction. A given completer element that is processing a given transaction in the sequence is arranged to issue a response signal to the requester element over its associated communication channel that comprises an ordered channel indication to identify whether the associated communication channel has an ordered channel property. The ordered channel property guarantees that processing of transactions issued by the requester element over the associated communication channel in a given order will be completed by the given completer element in the same given order.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 20, 2021
    Inventors: Tushar P. RINGE, Jamshed JALAL, Gurunath RAMAGIRI, Ashok Kumar TUMMALA, Mark David WERKHEISER
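    Illustrative sketch: a minimal Python model of a response carrying an ordered channel indication; the Response and Completer classes are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass
      class Response:
          txn_id: int
          ordered_channel: bool    # ordered channel indication in the response signal

      class Completer:
          # A completer whose channel guarantees in-order completion advertises it.
          def __init__(self, channel_is_ordered: bool):
              self.ordered = channel_is_ordered

          def process(self, txn_id: int) -> Response:
              return Response(txn_id, ordered_channel=self.ordered)

      class Requester:
          def handle(self, resp: Response) -> bool:
              # With an ordered channel the requester can rely on in-order completion
              # for later transactions in the sequence over that channel.
              return resp.ordered_channel

      assert Requester().handle(Completer(True).process(1)) is True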
  • Publication number: 20210126877
    Abstract: An improved protocol is provided for data transfer between a request node and a home node of a data processing network that includes a number of devices coupled via an interconnect fabric; the protocol minimizes the number of response messages transported through the interconnect fabric. When congestion is detected in the interconnect fabric, a home node sends a combined response to a write request from a request node. The response is delayed until a data buffer is available at the home node and the home node has completed an associated coherence action. When the request node receives a combined response, the data to be written and the acknowledgment are coalesced in the data message.
    Type: Application
    Filed: May 2, 2019
    Publication date: April 29, 2021
    Applicant: Arm Limited
    Inventors: Jamshed Jalal, Tushar P Ringe, Phanindra Kumar Mannava, Dimitrios Kaseridis
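    Illustrative sketch: a minimal Python model of the combined response and the coalesced data-plus-acknowledgement message; the message names are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass
      class CombinedResponse:       # issued once a buffer is free and coherence is done
          buffer_ready: bool
          coherence_done: bool

      @dataclass
      class CoalescedData:          # write data and acknowledgement in one message
          data: bytes
          ack: bool

      def request_node_reply(resp: CombinedResponse, data: bytes) -> CoalescedData:
          # One message crosses the fabric instead of separate data and ack messages.
          assert resp.buffer_ready and resp.coherence_done
          return CoalescedData(data=data, ack=True)

      msg = request_node_reply(CombinedResponse(True, True), b"\x00" * 64)
      assert msg.ack and len(msg.data) == 64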
  • Publication number: 20210103525
    Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point in order to make that block of data visible to one or more other requester elements.
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Phanindra Kumar MANNAVA, Bruce James MATHEWSON, Jamshed JALAL
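    Illustrative sketch: a minimal Python model of pushing a block for a specified address range through the cache hierarchy to a visibility point; the two-level hierarchy and push_range method are illustrative assumptions.
      class CacheLevel:
          def __init__(self, name, below=None):
              self.name, self.below, self.lines = name, below, {}

          def push_range(self, start, end, visibility_point):
              # Push any cached copy in [start, end) through this level and below.
              for addr in [a for a in self.lines if start <= a < end]:
                  data = self.lines.pop(addr)
                  if self.below is not None:
                      self.below.lines[addr] = data      # hand down one level
                  else:
                      visibility_point[addr] = data      # now visible to other requesters
              if self.below is not None:
                  self.below.push_range(start, end, visibility_point)

      visible = {}                          # the determined visibility point
      l2 = CacheLevel("L2")
      l1 = CacheLevel("L1", below=l2)
      l1.lines[0x100] = b"block"

      # CMO request specifying an address range pushes the block through L1 and L2.
      l1.push_range(0x100, 0x140, visible)
      assert 0x100 not in l1.lines and 0x100 not in l2.lines and visible[0x100] == b"block"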
  • Publication number: 20210103524
    Abstract: Circuitry comprises a set of two or more data handling nodes each having respective storage circuitry to hold data; and a home node to serialise data access operations and to control coherency amongst data held by the one or more data handling nodes so that data written to a memory address is consistent with data read from that memory address in response to a subsequent access request; in which: a requesting node of the set of data handling nodes is configured to communicate a request to the home node for exclusive access to a given instance of data at a given memory address; and the home node is configured, in response to the request, to communicate information to other data handling nodes of the set of data handling nodes to control handling, by those other data handling nodes, of any further instances of the data at the given memory address which are held by those other data handling nodes.
    Type: Application
    Filed: October 8, 2019
    Publication date: April 8, 2021
    Inventors: Bruce James MATHEWSON, Phanindra Kumar MANNAVA, Jamshed JALAL, Klas Magnus BRUCE, Andrew John TURNER
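    Illustrative sketch: a minimal Python model of the home node granting exclusive access and directing the other nodes to handle their instances of the data (here, simply dropping them); the class names are illustrative assumptions.
      class DataNode:
          def __init__(self):
              self.storage = {}

      class HomeNode:
          def __init__(self, nodes):
              self.nodes = nodes                 # the set of data handling nodes

          def request_exclusive(self, requester, addr):
              # Serialise the access and tell every other node holding the address
              # what to do with its instance of the data.
              for node in self.nodes:
                  if node is not requester and addr in node.storage:
                      node.storage.pop(addr)

      a, b = DataNode(), DataNode()
      home = HomeNode([a, b])
      a.storage[0x40] = b"old"
      b.storage[0x40] = b"old"
      home.request_exclusive(requester=a, addr=0x40)
      assert 0x40 in a.storage and 0x40 not in b.storage   # a now holds it exclusively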
  • Publication number: 20210103543
    Abstract: An apparatus is provided, to be used with an interconnect comprising a home node. The apparatus includes general-purpose storage circuitry and specialised storage circuitry. Transfer circuitry performs a non-forwardable transfer of a data item from the general-purpose storage circuitry to the specialised storage circuitry. Transmit circuitry transmits an offer to the home node, at a time of the non-forwardable transfer, to transfer the data item to the home node. The apparatus is inhibited from forwarding the data item from the specialised storage circuitry to the home node.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Phanindra Kumar MANNAVA, Bruce James MATHEWSON, Klas Magnus BRUCE, Damien Guillaume Pierre PAYET, Jamshed JALAL, Alex James WAUGH
  • Publication number: 20210103460
    Abstract: Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue an individual all-zero-data write transaction specifying a data storage location to the interconnect, which conveys the individual all-zero-data write transaction to a target device which writes all-zero-data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Jamshed JALAL, Mark David WERKHEISER, Phanindra Kumar MANNAVA, Bruce James MATHEWSON
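    Illustrative sketch: a minimal Python model of an all-zero-data write that carries no payload, with the target writing the zeros itself; the 64-byte line size and class names are illustrative assumptions.
      from dataclasses import dataclass

      LINE = 64

      @dataclass
      class AllZeroWrite:          # no payload field: nothing uses the write data channel
          address: int

      class TargetDevice:
          def __init__(self, size):
              self.memory = bytearray(b"\xff" * size)

          def handle(self, req: AllZeroWrite):
              # The target materialises the zeros itself at the addressed location.
              self.memory[req.address:req.address + LINE] = bytes(LINE)

      dev = TargetDevice(256)
      dev.handle(AllZeroWrite(address=64))
      assert dev.memory[64:128] == bytes(LINE) and dev.memory[0:64] == b"\xff" * 64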
  • Patent number: 10970225
    Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point in order to make that block of data visible to one or more other requester elements.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: April 6, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal
  • Patent number: 10942865
    Abstract: A method and apparatus are provided to enable snoop forwarding to occur together with memory protection. A data processing apparatus in, for instance, the form of a home node forwards a snoop forwarding request on behalf of a requester to a target, the snoop forwarding request being capable of indicating one or more access permissions of the target in relation to the data. A further data processing apparatus in the form of, for instance, a receiver node may receive the snoop forwarding request and, based on its own permissions that are provided in the snoop forwarding request, together with the state of the data, provide a response back to either the requester or the home node. In a still further data processing apparatus in the form of, for instance, a Memory Protection Unit (MPU), a regular snoop forwarding request made to a target in relation to data can be forwarded to the target or demoted to a non-forwarding snoop request based on the permissions of the target in relation to the data at the MPU.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: March 9, 2021
    Assignee: Arm Limited
    Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Iat Pui Chan, Lakshmi Joga Vishnu Vardhan Badukonda
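    Illustrative sketch: a minimal Python model of an MPU demoting a forwarding snoop to a non-forwarding snoop based on the target's permissions; the SnoopRequest fields are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass
      class SnoopRequest:
          address: int
          forwarding: bool          # a forwarding snoop asks the target to pass data on

      def mpu_filter(req: SnoopRequest, target_can_read: bool) -> SnoopRequest:
          # If the target lacks permission for the data, demote the forwarding snoop
          # to a plain (non-forwarding) snoop rather than blocking it outright.
          if req.forwarding and not target_can_read:
              return SnoopRequest(req.address, forwarding=False)
          return req

      demoted = mpu_filter(SnoopRequest(0x200, forwarding=True), target_can_read=False)
      assert demoted.forwarding is False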
  • Publication number: 20210058335
    Abstract: The present disclosure advantageously provides a system and method for protocol layer tunneling for a data processing system. A system includes an interconnect, a request node coupled to the interconnect, and a home node coupled to the interconnect. The request node includes a request node processor, and the home node includes a home node processor. The request node processor is configured to send, to the home node, a sequence of dynamic requests, receive a sequence of retry requests associated with the sequence of dynamic requests, and send a sequence of static requests associated with the sequence of dynamic requests in response to receiving credit grants from the home node. The home node processor is configured to send the sequence of retry requests in response to receiving the sequence of dynamic requests, determine the credit grants, and send the credit grants.
    Type: Application
    Filed: August 23, 2019
    Publication date: February 25, 2021
    Applicant: Arm Limited
    Inventors: Tushar P. Ringe, Jamshed Jalal, Kishore Kumar Jagadeesha
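    Illustrative sketch: a minimal Python model of the dynamic-request, retry and credit-grant exchange; the message strings and credit accounting are illustrative assumptions.
      from collections import deque

      class HomeNode:
          def __init__(self, credits):
              self.credits, self.retried = credits, deque()

          def receive_dynamic(self, req):
              self.retried.append(req)
              return "RetryAck"                      # ask the requester to wait for credit

          def grant_credits(self):
              while self.credits and self.retried:
                  self.credits -= 1
                  yield self.retried.popleft()       # credit grant for this request

      class RequestNode:
          def __init__(self, home):
              self.home, self.waiting = home, []

          def send_dynamic(self, req):
              if self.home.receive_dynamic(req) == "RetryAck":
                  self.waiting.append(req)

          def send_static(self, granted):
              # Static (credited) requests are sent only for granted dynamic requests.
              return [("Static", r) for r in granted if r in self.waiting]

      home = HomeNode(credits=2)
      rn = RequestNode(home)
      for r in ("A", "B", "C"):
          rn.send_dynamic(r)
      assert rn.send_static(list(home.grant_credits())) == [("Static", "A"), ("Static", "B")]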
  • Patent number: 10917198
    Abstract: In a data processing network comprising one or more Request Nodes and a Home Node coupled via a coherent interconnect, a Request Node requests data from the Home Node. The requested data is sent, via the interconnect, to the Request Node in a plurality of data beats, where a first data beat of the plurality of data beats is received at a first time and a last data beat is received at a second time. Responsive to receiving the first data beat, the Request Node sends an acknowledgement message to the Home Node. Upon receipt of the acknowledgement message, the Home Node frees resources allocated to the read transaction. In addition, the Home Node is configured to allow snoop requests for the data to be sent to the Request Node before all beats of the requested data have been received by the Request Node.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: February 9, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Tushar P. Ringe
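    Illustrative sketch: a minimal Python model of acknowledging on the first data beat so the home node can free its resources early; the class names are illustrative assumptions.
      class HomeNode:
          def __init__(self):
              self.open = {}                         # resources held per transaction

          def start_read(self, txn, beats):
              self.open[txn] = "allocated"
              return beats

          def on_ack(self, txn):
              self.open.pop(txn, None)               # freed on the acknowledgement

      class RequestNode:
          def __init__(self, home):
              self.home, self.received = home, []

          def receive(self, txn, beats):
              for i, beat in enumerate(beats):
                  self.received.append(beat)
                  if i == 0:
                      self.home.on_ack(txn)          # ack sent on the first data beat

      home = HomeNode()
      rn = RequestNode(home)
      rn.receive(7, home.start_read(7, ["beat0", "beat1", "beat2", "beat3"]))
      assert home.open == {} and len(rn.received) == 4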
  • Patent number: 10877836
    Abstract: A fault tolerant data processing network includes a number of nodes intercoupled through an interconnect circuit. The micro-architectures of the nodes are configured for sending and receiving messages via the interconnect circuit. In operation, a first Request Node sends a read request to a Home Node. In response, the Home Node initiates transmission of the requested data to the first Request Node. When the first Request Node detects that a fault has occurred, it sends a negative-acknowledgement message to the Home Node. In response, the Home Node again initiates transmission of the requested data to the first Request Node. The requested data may be transmitted from a local cache of a second Request Node or transmitted by a Slave Node after being retrieved from a memory. The data may be transmitted to the first Request Node via the Home Node or directly via the interconnect.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: December 29, 2020
    Assignee: Arm Limited
    Inventors: Zheng Xu, Jamshed Jalal
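    Illustrative sketch: a minimal Python model of fault detection followed by a negative acknowledgement and retransmission; modelling the NACK as a direct re-read is an illustrative simplification.
      class HomeNode:
          def __init__(self, memory):
              self.memory = memory

          def read(self, addr, requester, inject_fault=False):
              data = self.memory[addr]
              requester.deliver(addr, None if inject_fault else data, self)

      class RequestNode:
          def __init__(self):
              self.received = {}

          def deliver(self, addr, data, home):
              if data is None:
                  # Fault detected: NACK the home node, which initiates
                  # transmission of the requested data again.
                  home.read(addr, self)
              else:
                  self.received[addr] = data

      home = HomeNode({0x10: b"payload"})
      rn = RequestNode()
      home.read(0x10, rn, inject_fault=True)      # first attempt is corrupted
      assert rn.received[0x10] == b"payload"      # retransmission succeeded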
  • Patent number: 10877904
    Abstract: A system, apparatus and method for protecting coherent memory contents in a coherent data processing network by filtering data access requests and snoop responses based on the Read/Write (R/W) access permissions. Requests are augmented with access permissions in memory protection units and the access permissions are used to control memory access by home nodes of the network.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: December 29, 2020
    Assignee: Arm Limited
    Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Ashok Kumar Tummala, Mark David Werkheiser
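    Illustrative sketch: a minimal Python model of a home node filtering accesses by the R/W permissions attached to a request; the permission encoding is an illustrative assumption.
      def home_node_filter(request_type, permissions):
          # Requests arrive augmented with the requester's R/W permissions (added by
          # the memory protection unit); the home node allows or rejects accordingly.
          needed = "W" if request_type == "write" else "R"
          return needed in permissions

      assert home_node_filter("read", permissions="R") is True
      assert home_node_filter("write", permissions="R") is False   # write is filtered out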
  • Publication number: 20200394141
    Abstract: A method and apparatus are provided to enable snoop forwarding to occur together with memory protection. A data processing apparatus in, for instance, the form of a home node forwards a snoop forwarding request on behalf of a requester to a target, the snoop forwarding request being capable of indicating one or more access permissions of the target in relation to the data. A further data processing apparatus in the form of, for instance, a receiver node may receive the snoop forwarding request and, based on its own permissions that are provided in the snoop forwarding request, together with the state of the data, provide a response back to either the requester or the home node. In a still further data processing apparatus in the form of, for instance, a Memory Protection Unit (MPU), a regular snoop forwarding request made to a target in relation to data can be forwarded to the target or demoted to a non-forwarding snoop request based on the permissions of the target in relation to the data at the MPU.
    Type: Application
    Filed: June 13, 2019
    Publication date: December 17, 2020
    Applicant: Arm Limited
    Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Iat Pui Chan, Lakshmi Joga Vishnu Vardhan Badukonda
  • Patent number: 10853271
    Abstract: An apparatus includes a first device configured to generate a transaction request targeted to a first address, a switch coupled to the first device and configured to route the transaction request, a port coupled to the switch and to the data processing network, and a system memory management unit coupled to the port. The system memory management unit is configured for receiving an address query for the first address from the port, translating the first address to a second address, accessing attributes of a device associated with the second address, and responding to the query. Access validation for the transaction request is confirmed or denied dependent upon the second address and the attributes of the device associated with the second address. The first device may be a peripheral device, the switch may be a peripheral switch and the port may be a peripheral port.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: December 1, 2020
    Assignee: Arm Limited
    Inventors: Tessil Thomas, Jamshed Jalal, Andrea Pellegrini, Anitha Kona
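    Illustrative sketch: a minimal Python model of the system memory management unit translating the first address, looking up device attributes and validating the access; the table-based translation and attribute sets are illustrative assumptions.
      class SystemMMU:
          def __init__(self, translations, device_attributes):
              self.translations = translations          # first address -> second address
              self.attributes = device_attributes       # second address -> device attributes

          def validate(self, first_address, required):
              # Translate, look up the target device's attributes, and confirm or deny.
              second = self.translations.get(first_address)
              if second is None:
                  return False
              return required in self.attributes.get(second, set())

      smmu = SystemMMU({0x1000: 0x8000}, {0x8000: {"write-allowed"}})
      assert smmu.validate(0x1000, "write-allowed") is True
      assert smmu.validate(0x1000, "read-allowed") is False     # access validation denied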
  • Patent number: 10795820
    Abstract: Apparatus, and a corresponding method of operating the apparatus, in a coherent interconnect system comprising a requesting master device and a data-storing slave device, are provided. The apparatus maintains records of coherency protocol transactions received from the requesting master device whilst completion of those coherency protocol transactions is pending, and is responsive to reception of a read transaction from the requesting master device for a data item stored in the data-storing slave device to issue a direct memory transfer request to the data-storing slave device. A read acknowledgement trigger is added to the direct memory transfer request, and in response to reception of a read acknowledgement signal from the data-storing slave device the record created on reception of the read transaction is updated to reflect completion of the direct memory transfer request.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: October 6, 2020
    Assignee: ARM Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Tushar P Ringe
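    Illustrative sketch: a minimal Python model of a direct memory transfer request carrying a read acknowledgement trigger, with the pending record updated when the acknowledgement arrives; the class names are illustrative assumptions.
      class Tracker:
          def __init__(self):
              self.records = {}                        # pending coherency transactions

          def on_read_transaction(self, txn, slave):
              self.records[txn] = "pending"
              # The direct memory transfer request carries a read acknowledgement trigger.
              slave.direct_transfer(txn, ack_to=self)

          def on_read_ack(self, txn):
              self.records[txn] = "complete"           # record updated on the ack signal

      class SlaveDevice:
          def direct_transfer(self, txn, ack_to):
              # Return the read acknowledgement signal (data path to the master omitted).
              ack_to.on_read_ack(txn)

      tracker = Tracker()
      tracker.on_read_transaction(txn=1, slave=SlaveDevice())
      assert tracker.records[1] == "complete"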
  • Publication number: 20200301854
    Abstract: A system, apparatus and method for protecting coherent memory contents in a coherent data processing network by filtering data access requests and snoop responses based on the Read/Write (R/W) access permissions. Requests are augmented with access permissions in memory protection units and the access permissions are used to control memory access by home nodes of the network.
    Type: Application
    Filed: March 22, 2019
    Publication date: September 24, 2020
    Applicant: Arm Limited
    Inventors: Gurunath Ramagiri, Tushar P. Ringe, Mukesh Patel, Jamshed Jalal, Ashok Kumar Tummala, Mark David Werkheiser
  • Patent number: 10783080
    Abstract: An interconnect system and method of operating the system are disclosed. A master device has access to a cache and a slave device has an associated data storage device for long-term storage of data items. The master device can initiate a cache maintenance operation in the interconnect system with respect to a data item temporarily stored in the cache, causing action to be taken by the slave device with respect to storage of the data item in the data storage device. For long-latency operations, the master device can issue a separated cache maintenance request specifying the data item and the slave device. In response, an intermediate device signals an acknowledgment response indicating that it has taken on responsibility for completion of the cache maintenance operation and issues the separated cache maintenance request to the slave device.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: September 22, 2020
    Assignee: ARM LIMITED
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Paul Gilbert Meyer
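    Illustrative sketch: a minimal Python model of a separated cache maintenance request where an intermediate device acknowledges immediately, takes on responsibility and completes the operation later; the class names are illustrative assumptions.
      class SlaveDevice:
          def __init__(self):
              self.persisted = set()

          def maintain(self, item):
              self.persisted.add(item)                 # long-term storage action

      class IntermediateDevice:
          def __init__(self, slave):
              self.slave, self.pending = slave, []

          def separated_cmo(self, item):
              self.pending.append(item)                # responsibility now rests here
              return "Ack"                             # master is released immediately

          def drain(self):
              while self.pending:
                  self.slave.maintain(self.pending.pop())

      slave = SlaveDevice()
      mid = IntermediateDevice(slave)
      assert mid.separated_cmo("line@0x80") == "Ack"   # early acknowledgement response
      mid.drain()                                      # completion happens later
      assert "line@0x80" in slave.persisted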