Patents by Inventor Bruce James Mathewson

Bruce James Mathewson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240095183
    Abstract: An apparatus and method are provided for storing a plurality of translation entries in a cache, each translation entry corresponding to one of a plurality of page table entries and defining a translation between a first address and a second address, and encoding control information indicative of an attribute of each page table entry; returning, in response to a lookup querying a first lookup address, a corresponding second address when the first lookup address corresponds to one of the plurality of translation entries stored in the cache; modifying at least some of the control information in response to notification of a modification of the attribute in a page table entry; and retaining in the cache at least one translation entry corresponding to the page table entry for use in a subsequent address lookup querying a corresponding first lookup address in response to the notification of the modification of the attribute in the page table entry.
    Type: Application
    Filed: February 2, 2022
    Publication date: March 21, 2024
    Applicant: Arm Limited
    Inventors: Carlos Garcia-Tobin, Bruce James Mathewson, Matthew Lucien Evans, Richard Roy Grisenthwaite
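A minimal Python sketch of the retention idea summarised in the abstract above (publication 20240095183). The class and method names (`TranslationCache`, `notify_attribute_change`) and the dictionary-based storage are illustrative assumptions, a software model of the behaviour rather than the patented circuitry.

```python
# Sketch of a translation cache that keeps entries usable after a page-table
# attribute change by updating the cached control information in place
# instead of invalidating the entry.

class TranslationCache:
    def __init__(self):
        # va_page -> [pa_page, control_bits]; control_bits mirrors a
        # page-table attribute such as "writable".
        self.entries = {}

    def fill(self, va_page, pa_page, control_bits):
        self.entries[va_page] = [pa_page, control_bits]

    def lookup(self, va_page):
        # Returns (pa_page, control_bits) on a hit, None on a miss.
        hit = self.entries.get(va_page)
        return tuple(hit) if hit else None

    def notify_attribute_change(self, va_page, new_control_bits):
        # On notification of a page-table attribute modification, modify the
        # cached control information but retain the translation so a
        # subsequent lookup of the same first (virtual) address still hits.
        if va_page in self.entries:
            self.entries[va_page][1] = new_control_bits

tlb = TranslationCache()
tlb.fill(va_page=0x1000, pa_page=0x8000, control_bits={"writable": True})
tlb.notify_attribute_change(0x1000, {"writable": False})
assert tlb.lookup(0x1000) == (0x8000, {"writable": False})  # entry retained
```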
  • Patent number: 11599467
    Abstract: The present disclosure advantageously provides a system cache and a method for storing coherent data and non-coherent data in a system cache. A transaction is received from a source in a system, the transaction including at least a memory address, the source having a location in a coherent domain or a non-coherent domain of the system, the coherent domain including shareable data and the non-coherent domain including non-shareable data. Whether the memory address is stored in a cache line is determined, and, when the memory address is not determined to be stored in a cache line, a cache line is allocated to the transaction including setting a state bit of the allocated cache line based on the source location to indicate whether shareable or non-shareable data is stored in the allocated cache line, and the transaction is processed.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: March 7, 2023
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Bruce James Mathewson, Tushar P Ringe, Sean James Salisbury, Antony John Harris
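A minimal sketch of the allocation policy described in the abstract above (patent 11599467): on a miss, the allocated line's state bit is derived from whether the source sits in the coherent or the non-coherent domain. Names such as `SystemCache` and `handle_transaction` are assumptions for illustration only.

```python
# Sketch of a system cache that tags each allocated line with a "shareable"
# state bit derived from the requesting source's domain.

class SystemCache:
    def __init__(self):
        self.lines = {}  # address -> {"shareable": bool, "data": ...}

    def handle_transaction(self, address, source_is_coherent, data=None):
        line = self.lines.get(address)
        if line is None:
            # Miss: allocate a line and set its state bit from the source's
            # domain (coherent -> shareable, non-coherent -> non-shareable).
            line = {"shareable": source_is_coherent, "data": data}
            self.lines[address] = line
        # Process the transaction against the (possibly new) line.
        if data is not None:
            line["data"] = data
        return line

cache = SystemCache()
cache.handle_transaction(0x40, source_is_coherent=True, data="A")
cache.handle_transaction(0x80, source_is_coherent=False, data="B")
assert cache.lines[0x40]["shareable"] and not cache.lines[0x80]["shareable"]
```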
  • Patent number: 11537543
    Abstract: An apparatus and method are provided for handling protocol conversion. The apparatus has interconnect circuitry for routing messages between components coupled to the interconnect circuitry in a manner that conforms to a first communication protocol. Protocol conversion circuitry is coupled between the interconnect circuitry and an external communication path, for converting messages between the first communication protocol and a second communication protocol that has a layered architecture comprising multiple layers. The protocol conversion circuitry has a gateway component forming one of the components coupled to the interconnect circuitry, and a controller coupled with the gateway component and used to control connection with the external communication path.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: December 27, 2022
    Assignee: Arm Limited
    Inventors: Ashok Kumar Tummala, Jamshed Jalal, Antony John Harris, Jeffrey Carl Defilippi, Anitha Kona, Bruce James Mathewson
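A rough software analogue of the arrangement in the abstract above (patent 11537543): a gateway that appears to the interconnect as an ordinary component and a separate controller that manages the external, layered link. The classes, the three-layer packet shape, and the message fields are invented for the sketch, not taken from the patent.

```python
# Sketch of protocol conversion circuitry: a gateway on the interconnect plus
# a controller that handles connection to the external communication path.

class LinkController:
    def __init__(self):
        self.connected = False

    def connect(self):
        # Controls establishment of the external communication path.
        self.connected = True

class Gateway:
    """Converts interconnect-protocol messages into layered-protocol packets."""
    def __init__(self, controller):
        self.controller = controller

    def send(self, message):
        if not self.controller.connected:
            self.controller.connect()
        # Wrap the first-protocol message in the layers of the second
        # protocol (transaction -> link -> physical), outermost last.
        packet = {"phys": {"link": {"transaction": message}}}
        return packet

gw = Gateway(LinkController())
print(gw.send({"opcode": "ReadNoSnoop", "addr": 0x1000}))
```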
  • Publication number: 20220382679
    Abstract: The present disclosure advantageously provides a system cache and a method for storing coherent data and non-coherent data in a system cache. A transaction is received from a source in a system, the transaction including at least a memory address, the source having a location in a coherent domain or a non-coherent domain of the system, the coherent domain including shareable data and the non-coherent domain including non-shareable data. Whether the memory address is stored in a cache line is determined, and, when the memory address is not determined to be stored in a cache line, a cache line is allocated to the transaction including setting a state bit of the allocated cache line based on the source location to indicate whether shareable or non-shareable data is stored in the allocated cache line, and the transaction is processed.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 1, 2022
    Applicant: Arm Limited
    Inventors: Jamshed Jalal, Bruce James Mathewson, Tushar P Ringe, Sean James Salisbury, Antony John Harris
  • Publication number: 20220283972
    Abstract: An apparatus and method are provided for handling protocol conversion. The apparatus has interconnect circuitry for routing messages between components coupled to the interconnect circuitry in a manner that conforms to a first communication protocol. Protocol conversion circuitry is coupled between the interconnect circuitry and an external communication path, for converting messages between the first communication protocol and a second communication protocol that has a layered architecture comprising multiple layers. The protocol conversion circuitry has a gateway component forming one of the components coupled to the interconnect circuitry, and a controller coupled with the gateway component and used to control connection with the external communication path.
    Type: Application
    Filed: March 2, 2021
    Publication date: September 8, 2022
    Inventors: Ashok Kumar Tummala, Jamshed Jalal, Antony John Harris, Jeffrey Carl Defilippi, Anitha Kona, Bruce James Mathewson
  • Patent number: 11314675
    Abstract: A data processing system comprises a master node to initiate data transmissions; one or more slave nodes to receive the data transmissions; and a home node to control coherency amongst data stored by the data processing system; in which at least one data transmission from the master node to one of the one or more slave nodes bypasses the home node.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: April 26, 2022
    Assignee: Arm Limited
    Inventors: Guanghui Geng, Andrew David Tune, Daniel Adam Sara, Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal
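A minimal sketch of the bypass described in the abstract above (patent 11314675): the home node still performs coherency bookkeeping, but the data payload travels directly from master to slave. All class and function names here are hypothetical.

```python
# Sketch of a direct data transmission that bypasses the home node.

class HomeNode:
    def __init__(self):
        self.data_bytes_seen = 0

    def grant_write(self, address):
        # Coherency bookkeeping only; no data passes through the home node.
        return True

class SlaveNode:
    def __init__(self):
        self.memory = {}

    def receive_data(self, address, data):
        self.memory[address] = data

def direct_write(master_data, address, home, slave):
    if home.grant_write(address):
        slave.receive_data(address, master_data)  # home node bypassed for data

home, slave = HomeNode(), SlaveNode()
direct_write(b"payload", 0x100, home, slave)
assert slave.memory[0x100] == b"payload" and home.data_bytes_seen == 0
```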
  • Patent number: 11314648
    Abstract: Data processing apparatus comprises a data access requesting node; data access circuitry to receive a data access request from the data access requesting node and to route the data access request for fulfilment by one or more data storage nodes selected from a group of two or more data storage nodes; and indication circuitry to provide a source indication to the data access requesting node, to indicate an attribute of the one or more data storage nodes which fulfilled the data access request; the data access requesting node being configured to vary its operation in response to the source indication.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: April 26, 2022
    Assignee: Arm Limited
    Inventors: Michael Filippo, Jamshed Jalal, Klas Magnus Bruce, Alex James Waugh, Geoffray Lacourba, Paul Gilbert Meyer, Bruce James Mathewson, Phanindra Kumar Mannava
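A small sketch of the idea in the abstract above (patent 11314648): the requester receives an indication of which storage node fulfilled its access and adapts its behaviour accordingly. The prefetch-distance policy and the `latency_class` attribute are assumptions used purely to make the example concrete.

```python
# Sketch of a requester varying its operation in response to a source
# indication returned with the data.

def fulfil(address, storage_nodes):
    # Route the request to the first node holding the address and return the
    # data together with an attribute of the fulfilling node.
    for node in storage_nodes:
        if address in node["contents"]:
            return node["contents"][address], node["latency_class"]
    raise KeyError(address)

class Requester:
    def __init__(self):
        self.prefetch_distance = 4

    def load(self, address, storage_nodes):
        data, source = fulfil(address, storage_nodes)
        # Vary operation in response to the source indication.
        self.prefetch_distance = 8 if source == "near_cache" else 2
        return data

nodes = [{"latency_class": "near_cache", "contents": {0x10: "hot"}},
         {"latency_class": "dram", "contents": {0x20: "cold"}}]
r = Requester()
r.load(0x20, nodes)
assert r.prefetch_distance == 2
```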
  • Patent number: 11269773
    Abstract: Circuitry comprises a set of two or more data handling nodes each having respective storage circuitry to hold data; and a home node to serialise data access operations and to control coherency amongst data held by the one or more data handling nodes so that data written to a memory address is consistent with data read from that memory address in response to a subsequent access request; in which: a requesting node of the set of data handling nodes is configured to communicate a request to the home node for exclusive access to a given instance of data at a given memory address; and the home node is configured, in response to the request, to communicate information to other data handling nodes of the set of data handling nodes to control handling, by those other data handling nodes, of any further instances of the data at the given memory address which are held by those other data handling nodes.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: March 8, 2022
    Assignee: Arm Limited
    Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Jamshed Jalal, Klas Magnus Bruce, Andrew John Turner
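A minimal model of the exclusive-access flow in the abstract above (patent 11269773): the home node serialises the request and instructs the other holders how to treat their copies (invalidation is used here as one possible handling; the message names are invented).

```python
# Sketch of a home node granting exclusive access and controlling how other
# nodes handle their copies of the line.

class Node:
    def __init__(self, name):
        self.name = name
        self.copies = {}  # address -> data

class Home:
    def __init__(self, nodes):
        self.nodes = nodes

    def request_exclusive(self, requester, address):
        # Serialise: inform all *other* nodes holding the address.
        for node in self.nodes:
            if node is not requester and address in node.copies:
                del node.copies[address]   # control handling of further copies
        return True                        # requester now holds it exclusively

a, b = Node("RN-A"), Node("RN-B")
a.copies[0x40] = b.copies[0x40] = "shared"
Home([a, b]).request_exclusive(a, 0x40)
assert 0x40 in a.copies and 0x40 not in b.copies
```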
  • Patent number: 11263137
    Abstract: A method and apparatus are disclosed for transferring data from a first processor core to a second processor core. The first processor core executes a stash instruction having a first operand associated with a data address of the data. A second processor core is determined to be a stash target for a stash message, based on the data address or a second operand. A stash message is sent to the second processor core, notifying the second processor core of the written data. Responsive to receiving the stash message, the second processor core can opt to store the data in its cache. The data may be included in the stash message or retrieved in response to a read request by the second processor core. The second processor core may be determined by prediction based, at least in part, on monitored data transactions.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: March 1, 2022
    Assignee: Arm Limited
    Inventors: Jose Alberto Joao, Tiago Rogerio Muck, Joshua Randall, Alejandro Rico Carro, Bruce James Mathewson
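A minimal sketch of the stash flow in the abstract above (patent 11263137): the producer core executes a stash, a target core is chosen by a simple predictor over recent consumers, and the target may accept the hint by warming its cache. The predictor and helper names are assumptions of this example.

```python
# Sketch of a stash: producer stashes a line, a predicted consumer core is
# notified and may opt to cache the data.

class Core:
    def __init__(self, cid):
        self.cid = cid
        self.cache = {}

    def on_stash_message(self, address, data, accept=True):
        # The receiving core may ignore the hint; accepting it warms its cache.
        if accept:
            self.cache[address] = data

def predict_target(address, recent_consumers):
    # Prediction based on monitored data transactions (last consumer wins).
    return recent_consumers.get(address)

def stash(producer, address, data, cores, recent_consumers):
    target = predict_target(address, recent_consumers)
    if target is not None:
        cores[target].on_stash_message(address, data)

cores = {0: Core(0), 1: Core(1)}
stash(cores[0], 0x200, "msg", cores, recent_consumers={0x200: 1})
assert cores[1].cache[0x200] == "msg"
```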
  • Patent number: 11256623
    Abstract: Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item is retrieved and brought into cache. Since the target device can therefore decide whether to respond to the trigger or not, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: February 22, 2022
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce, Michael Filippo, Paul Gilbert Meyer, Alex James Waugh, Geoffray Matthieu Lacourba
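A small model of the two-step hint in the abstract above (patent 11256623): the hub turns a pre-population request into a pre-population trigger, and the target decides for itself whether to fetch, so it never receives unsolicited data. Transaction and method names here are invented.

```python
# Sketch of a hub converting a cache pre-population request into a trigger
# that the target is free to act on or ignore.

class Target:
    def __init__(self, willing):
        self.willing = willing
        self.cache = set()

    def on_trigger(self, data_item, fetch):
        # The target decides; if it accepts, it issues its own read.
        if self.willing:
            self.cache.add(fetch(data_item))

class Hub:
    def __init__(self, memory):
        self.memory = memory

    def on_prepopulate_request(self, data_item, target):
        target.on_trigger(data_item, fetch=lambda item: self.memory[item])

hub = Hub(memory={0x80: "line"})
eager, busy = Target(willing=True), Target(willing=False)
hub.on_prepopulate_request(0x80, eager)
hub.on_prepopulate_request(0x80, busy)
assert "line" in eager.cache and not busy.cache
```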
  • Publication number: 20210374059
    Abstract: A method and apparatus are disclosed for transferring data from a first processor core to a second processor core. The first processor core executes a stash instruction having a first operand associated with a data address of the data. A second processor core is determined to be a stash target for a stash message, based on the data address or a second operand. A stash message is sent to the second processor core, notifying the second processor core of the written data. Responsive to receiving the stash message, the second processor core can opt to store the data in its cache. The data may be included in the stash message or retrieved in response to a read request by the second processor core. The second processor core may be determined by prediction based, at least in part, on monitored data transactions.
    Type: Application
    Filed: May 27, 2020
    Publication date: December 2, 2021
    Applicant: Arm Limited
    Inventors: Jose Alberto Joao, Tiago Rogerio Muck, Joshua Randall, Alejandro Rico Carro, Bruce James Mathewson
  • Patent number: 11188377
    Abstract: Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue an individual all-zero-data write transaction specifying a data storage location to the interconnect, which conveys the individual all-zero-data write transaction to a target device which writes all-zero-data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: November 30, 2021
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Mark David Werkheiser, Phanindra Kumar Mannava, Bruce James Mathewson
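A minimal sketch of the all-zero-data write in the abstract above (patent 11188377): the request carries only an address and a "zero" opcode, the write-data channel is never used, and the completer materialises the zeros itself. The opcode name and line size are assumptions for the example.

```python
# Sketch of an all-zero-data write that clears a location without conveying
# any payload on the write data channel.

CACHE_LINE_BYTES = 64

class WriteDataChannel:
    def __init__(self):
        self.beats_sent = 0

class TargetDevice:
    def __init__(self):
        self.memory = {}

    def handle(self, txn, data_channel):
        if txn["opcode"] == "WriteZero":
            # No payload was conveyed; synthesise the zeros locally.
            self.memory[txn["addr"]] = bytes(CACHE_LINE_BYTES)
        else:
            self.memory[txn["addr"]] = txn["payload"]
            data_channel.beats_sent += 1

chan, dev = WriteDataChannel(), TargetDevice()
dev.handle({"opcode": "WriteZero", "addr": 0x1000}, chan)
assert dev.memory[0x1000] == bytes(64) and chan.beats_sent == 0
```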
  • Patent number: 11159636
    Abstract: A data processing apparatus is provided, which includes receiving circuitry to receive a snoop request in respect of requested data on behalf of a requesting node. The snoop request includes an indication as to whether forwarding is to occur. Transmitting circuitry transmits a response to the snoop request and cache circuitry caches at least one data value. When forwarding is to occur and the at least one data value includes the requested data, the response includes the requested data and the transmitting circuitry transmits the response to the requesting node.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: October 26, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce
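A minimal sketch of the forwarding snoop in the abstract above (patent 11159636): the snoop carries a forwarding indication, and a snooped cache that holds the line replies with the data addressed to the original requester. Field names are illustrative.

```python
# Sketch of a forwarding snoop: on a hit with forwarding requested, the
# response containing the data is sent to the requesting node.

def handle_snoop(cache, snoop):
    hit = snoop["addr"] in cache
    if snoop["forward"] and hit:
        # Respond with the requested data, addressed to the requesting node.
        return {"to": snoop["requester"], "data": cache[snoop["addr"]]}
    # Otherwise just acknowledge (to the home node in a real system).
    return {"to": "home", "data": None}

peer_cache = {0x40: "dirty-line"}
resp = handle_snoop(peer_cache, {"addr": 0x40, "forward": True, "requester": "RN-0"})
assert resp == {"to": "RN-0", "data": "dirty-line"}
```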
  • Patent number: 11144458
    Abstract: An apparatus (2) comprises processing circuitry (4) for performing data processing in response to instructions. The processing circuitry (4) supports a cache maintenance instruction (50) specifying a virtual page address (52) identifying a virtual page of a virtual address space. In response to the cache maintenance instruction, the processing circuitry (4) triggers at least one cache (18, 20, 22) to perform a cache maintenance operation on one or more cache lines for which a physical address of the data stored by the cache line is within a physical page that corresponds to the virtual page identified by the virtual page address provided by the cache maintenance instruction.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: October 12, 2021
    Assignee: Arm Limited
    Inventors: Jason Parker, Bruce James Mathewson, Matthew Lucien Evans
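A toy model of the instruction described in the abstract above (patent 11144458): the virtual page is translated once, then every cache line whose physical address falls in the corresponding physical page receives the maintenance operation. The 4 KiB page size and the choice of invalidation as the operation are assumptions of the sketch.

```python
# Sketch of a by-virtual-page cache maintenance operation applied across a
# cache hierarchy.

PAGE = 0x1000

def cache_maintain_by_vpage(vpage, page_table, caches):
    ppage = page_table[vpage]                 # virtual page -> physical page
    for cache in caches:                      # e.g. L1, L2, L3
        for pa in list(cache):
            if pa & ~(PAGE - 1) == ppage:     # line lies within the page
                del cache[pa]                 # "maintenance" = invalidate here

l1, l2 = {0x8000: "a", 0x8040: "b"}, {0x9000: "c"}
cache_maintain_by_vpage(0x4000, page_table={0x4000: 0x8000}, caches=[l1, l2])
assert not l1 and l2 == {0x9000: "c"}
```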
  • Patent number: 11055250
    Abstract: An apparatus is provided, to be used with an interconnect comprising a home node. The apparatus includes general-purpose storage circuitry and specialised storage circuitry. Transfer circuitry performs a non-forwardable transfer of a data item from the general-purpose storage circuitry to the specialised storage circuitry. Transmit circuitry transmits an offer to the home node, at a time of the non-forwardable transfer, to transfer the data item to the home node. The apparatus is inhibited from forwarding the data item from the specialised storage circuitry to the home node.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: July 6, 2021
    Assignee: Arm Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Klas Magnus Bruce, Damien Guillaume Pierre Payet, Jamshed Jalal, Alex James Waugh
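A small sketch of the offer-at-transfer idea in the abstract above (patent 11055250): once a data item moves into the specialised storage it can no longer be forwarded, so the apparatus offers it to the home node at the moment of the transfer. Class and method names are invented for illustration.

```python
# Sketch of a non-forwardable transfer accompanied by an offer to the home node.

class Home:
    def __init__(self):
        self.offered = {}

    def offer(self, address, data):
        self.offered[address] = data   # the home node may accept or decline

class Apparatus:
    def __init__(self, home):
        self.general, self.specialised, self.home = {}, {}, home

    def non_forwardable_transfer(self, address):
        data = self.general.pop(address)
        self.specialised[address] = data
        # Transmit the offer at the time of the transfer; afterwards the item
        # cannot be forwarded out of the specialised storage.
        self.home.offer(address, data)

home = Home()
ap = Apparatus(home)
ap.general[0x40] = "line"
ap.non_forwardable_transfer(0x40)
assert 0x40 in home.offered and 0x40 in ap.specialised
```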
  • Publication number: 20210103525
    Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point in order to make that block of data visible to one or more other requester elements.
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal
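A toy model of the ranged maintenance operation in the abstract above (publication 20210103525): dirty lines within the requested address range are pushed down through the cache levels to the point at which other requesters can observe them. The hierarchy shape and function name are assumptions of the sketch.

```python
# Sketch of a ranged cache maintenance operation that pushes a block of data
# through the hierarchy to a determined visibility point.

def cmo_push_range(start, end, hierarchy, visibility_point):
    # hierarchy: list of dicts ordered from the innermost level outwards.
    for level in hierarchy:
        for addr in [a for a in level if start <= a < end]:
            visibility_point[addr] = level.pop(addr)   # push the block down

l1, l2 = {0x100: "x", 0x300: "y"}, {0x140: "z"}
llc = {}                                               # determined visibility point
cmo_push_range(0x100, 0x200, [l1, l2], llc)
assert llc == {0x100: "x", 0x140: "z"} and 0x300 in l1
```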
  • Publication number: 20210103524
    Abstract: Circuitry comprises a set of two or more data handling nodes each having respective storage circuitry to hold data; and a home node to serialise data access operations and to control coherency amongst data held by the one or more data handling nodes so that data written to a memory address is consistent with data read from that memory address in response to a subsequent access request; in which: a requesting node of the set of data handling nodes is configured to communicate a request to the home node for exclusive access to a given instance of data at a given memory address; and the home node is configured, in response to the request, to communicate information to other data handling nodes of the set of data handling nodes to control handling, by those other data handling nodes, of any further instances of the data at the given memory address which are held by those other data handling nodes.
    Type: Application
    Filed: October 8, 2019
    Publication date: April 8, 2021
    Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Jamshed Jalal, Klas Magnus Bruce, Andrew John Turner
  • Publication number: 20210103460
    Abstract: Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue an individual all-zero-data write transaction specifying a data storage location to the interconnect, which conveys the individual all-zero-data write transaction to a target device which writes all-zero-data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Jamshed Jalal, Mark David Werkheiser, Phanindra Kumar Mannava, Bruce James Mathewson
  • Publication number: 20210103543
    Abstract: An apparatus is provided, to be used with an interconnect comprising a home node. The apparatus includes general-purpose storage circuitry and specialised storage circuitry. Transfer circuitry performs a non-forwardable transfer of a data item from the general-purpose storage circuitry to the specialised storage circuitry. Transmit circuitry transmits an offer to the home node, at a time of the non-forwardable transfer, to transfer the data item to the home node. The apparatus is inhibited from forwarding the data item from the specialised storage circuitry to the home node.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Klas Magnus Bruce, Damien Guillaume Pierre Payet, Jamshed Jalal, Alex James Waugh
  • Publication number: 20210103493
    Abstract: A requester issues a request specifying a target address indicating an addressed location in a memory system. A completer responds to the request. Tag error checking circuitry performs a tag error checking operation when the request issued by the requester is a tag-error-checking request specifying an address tag. The tag error checking operation comprises determining whether the address tag matches an allocation tag stored in the memory system associated with a block of one or more addresses comprising the target address specified by the tag-error-checking request. The requester and the completer communicate via a memory interface having at least one data signal path to exchange read data or write data between the requester and the completer; and at least one tag signal path, provided in parallel with the at least one data signal path, to exchange address tags or allocation tags between the requester and the completer.
    Type: Application
    Filed: October 7, 2019
    Publication date: April 8, 2021
    Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Michael Andrew Campbell, Alexander Alfred Hornung, Alex James Waugh, Klas Magnus Bruce, Richard Roy Grisenthwaite
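A software-level sketch of the check described in the abstract above (publication 20210103493): the allocation tag stored for the addressed granule must match the address tag carried with a tag-error-checking request. This models only the comparison, not the parallel tag signal path on the memory interface; the 16-byte granule and class names are assumptions.

```python
# Sketch of a tag error checking operation: address tag vs stored allocation tag.

TAG_GRANULE = 16  # bytes covered by one allocation tag in this model

class TaggedMemory:
    def __init__(self):
        self.data, self.allocation_tags = {}, {}

    def set_allocation_tag(self, address, tag):
        self.allocation_tags[address // TAG_GRANULE] = tag

    def checked_read(self, address, address_tag):
        # Compare the address tag with the allocation tag stored for the
        # block of addresses containing the target address.
        if self.allocation_tags.get(address // TAG_GRANULE) != address_tag:
            raise MemoryError(f"tag mismatch at {address:#x}")
        return self.data.get(address, 0)

mem = TaggedMemory()
mem.set_allocation_tag(0x1000, tag=0x7)
mem.checked_read(0x1008, address_tag=0x7)      # same granule, tags match
try:
    mem.checked_read(0x1008, address_tag=0x3)  # mismatch -> tag error
except MemoryError as e:
    print(e)
```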