Patents by Inventor Michael Filippo
Michael Filippo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11663014
Abstract: A data processing apparatus is provided that comprises fetch circuitry to fetch an instruction stream comprising a plurality of instructions, including a status updating instruction, from storage circuitry. Status storage circuitry stores a status value. Execution circuitry executes the instructions, wherein at least some of the instructions are executed in an order other than in the instruction stream. For the status updating instruction, the execution circuitry is adapted to update the status value based on execution of the status updating instruction. Flush circuitry flushes, when the status storage circuitry is updated, following instructions that appear after the status updating instruction in the instruction stream.
Type: Grant
Filed: August 26, 2019
Date of Patent: May 30, 2023
Assignee: Arm Limited
Inventors: Abhishek Raja, Rakesh Shaji Lal, Michael Filippo, Glen Andrew Harris, Vasu Kudaravalli, Huzefa Moiz Sanjeliwala, Jason Setter
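To make the flush behaviour described above concrete, here is a minimal behavioural sketch in Python. The function name, the dictionary encoding of instructions, and the completion-order input are illustrative assumptions, not the patented circuitry.

```python
def run(stream, completion_order):
    """Instructions complete in 'completion_order', possibly out of program
    order. When a status updating instruction writes the status value, every
    instruction that appears after it in program order is flushed and
    reported for re-fetch."""
    status = None
    for idx in completion_order:
        insn = stream[idx]
        if insn.get("updates_status"):
            status = insn["new_status"]
            flushed = list(range(idx + 1, len(stream)))
            return status, flushed
    return status, []

stream = [{"op": "add"},
          {"op": "setstatus", "updates_status": True, "new_status": 3},
          {"op": "mul"},
          {"op": "sub"}]
# 'mul' executed ahead of the status update, but is still flushed.
print(run(stream, completion_order=[0, 2, 1, 3]))   # (3, [2, 3])
```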
-
Patent number: 11409530
Abstract: A system, apparatus and method for ordering a sequence of processing transactions. The method includes accessing, from a memory, a program sequence of operations that are to be executed. Instructions are received, some of them having an identifier, or mnemonic, that is used to distinguish those identified operations from other operations that do not have an identifier, or mnemonic. The mnemonic indicates a distribution of the execution of the program sequence of operations. The program sequence of operations is grouped based on the mnemonic such that certain operations are separated from other operations.
Type: Grant
Filed: August 16, 2018
Date of Patent: August 9, 2022
Assignee: Arm Limited
Inventors: Curtis Glenn Dunham, Pavel Shamis, Jamshed Jalal, Michael Filippo
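As a rough illustration of the grouping step described in the abstract, the following Python sketch separates operations that carry a mnemonic from those that do not. The data layout and field names are assumptions made purely for the example.

```python
from collections import defaultdict

def group_by_mnemonic(operations):
    """Separate operations that carry a mnemonic from those that do not,
    collecting the identified operations per mnemonic in program order."""
    grouped = defaultdict(list)
    unmarked = []
    for op in operations:
        tag = op.get("mnemonic")
        if tag is None:
            unmarked.append(op["name"])
        else:
            grouped[tag].append(op["name"])
    return dict(grouped), unmarked

ops = [{"name": "load",  "mnemonic": "grpA"},
       {"name": "add"},
       {"name": "store", "mnemonic": "grpA"},
       {"name": "send",  "mnemonic": "grpB"}]
print(group_by_mnemonic(ops))
# ({'grpA': ['load', 'store'], 'grpB': ['send']}, ['add'])
```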
-
Patent number: 11392378
Abstract: Circuitry comprises an instruction decoder to decode a gather load instruction having a vector operand comprising a plurality of vector entries, in which each vector entry defines, at least in part, a respective address from which data is to be loaded; the instruction decoder being configured to generate a set of load operations relating to respective individual addresses in dependence upon the vector operand, each of the set of load operations having a respective identifier which is unique with respect to other load operations in the set, and control circuitry to maintain a data item for the gather load instruction, the data item including a count value representing a number of load operations in the set of load operations awaiting issue for execution; and execution circuitry to execute the set of load operations; the control circuitry being configured, in response to a detection from the count value of the data item associated with a given gather load instruction that the set of load operations generated fo…
Type: Grant
Filed: July 25, 2019
Date of Patent: July 19, 2022
Assignee: Arm Limited
Inventors: Abhishek Raja, Michael Filippo, Huzefa Moiz Sanjeliwala, Kelvin Domnic Goveas
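The bookkeeping the abstract describes, one load operation per vector entry with a unique identifier and a count of loads awaiting issue, can be pictured with this hypothetical Python sketch; the class name and interface are assumptions, not the patented control circuitry.

```python
class GatherLoadTracker:
    def __init__(self, addresses):
        # One load operation per vector entry, each with a unique identifier.
        self.loads = [{"id": i, "addr": a} for i, a in enumerate(addresses)]
        self.pending = len(self.loads)   # count of loads awaiting issue

    def issue(self, load_id):
        """Mark the load with this identifier as issued; return True once the
        whole set for the gather load instruction has been issued."""
        self.pending -= 1
        return self.pending == 0

tracker = GatherLoadTracker(addresses=[0x100, 0x140, 0x180, 0x1C0])
for load in tracker.loads:
    done = tracker.issue(load["id"])
print(done)   # True: every load operation for the gather has been issued
```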
-
Patent number: 11327791
Abstract: An apparatus provides an issue queue having a first section and a second section. Each entry in each section stores operation information identifying an operation to be performed. Allocation circuitry allocates each item of received operation information to an entry in the first section or the second section. Selection circuitry selects from the issue queue, during a given selection iteration, an operation from amongst the operations whose required source operands are available. Availability update circuitry updates source operand availability for each entry whose operation information identifies as a source operand a destination operand of the selected operation in the given selection iteration. A deferral mechanism inhibits from selection, during a next selection iteration, any operation associated with an entry in the second section whose source operands are now available due to that operation having as a source operand the destination operand of the selected operation in the given selection iteration.
Type: Grant
Filed: August 21, 2019
Date of Patent: May 10, 2022
Assignee: Arm Limited
Inventors: Michael David Achenbach, Robert Greg McDonald, Nicholas Andrew Pfister, Kelvin Domnic Goveas, Michael Filippo, Abhishek Raja, Zachary Allen Kingsbury
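A simplified, hypothetical Python model of the two-section issue queue follows; the oldest-ready selection policy, the dictionary entry format, and the function name are assumptions added for illustration only.

```python
def select_and_wake(entries, ready_regs):
    """entries: dicts with 'section' (1 or 2), 'srcs', 'dst'. Returns the
    index selected this iteration and the set of second-section entries that
    were woken only by this result and so are deferred for one iteration."""
    ready = [i for i, e in enumerate(entries)
             if all(s in ready_regs for s in e["srcs"]) and not e.get("done")]
    if not ready:
        return None, set()
    picked = ready[0]                     # e.g. oldest-ready selection policy
    entries[picked]["done"] = True
    produced = entries[picked]["dst"]
    deferred = set()
    for i, e in enumerate(entries):
        if not e.get("done") and produced in e["srcs"] and e["section"] == 2:
            # Woken by the just-selected op: inhibit for the next iteration.
            deferred.add(i)
    ready_regs.add(produced)
    return picked, deferred

regs = {"r1", "r2"}
queue = [{"section": 1, "srcs": ["r1"], "dst": "r3"},
         {"section": 2, "srcs": ["r3"], "dst": "r4"}]
print(select_and_wake(queue, regs))   # (0, {1}): entry 1 deferred one iteration
```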
-
Patent number: 11314648
Abstract: Data processing apparatus comprises a data access requesting node; data access circuitry to receive a data access request from the data access requesting node and to route the data access request for fulfilment by one or more data storage nodes selected from a group of two or more data storage nodes; and indication circuitry to provide a source indication to the data access requesting node, to indicate an attribute of the one or more data storage nodes which fulfilled the data access request; the data access requesting node being configured to vary its operation in response to the source indication.
Type: Grant
Filed: February 8, 2017
Date of Patent: April 26, 2022
Assignee: Arm Limited
Inventors: Michael Filippo, Jamshed Jalal, Klas Magnus Bruce, Alex James Waugh, Geoffray Lacourba, Paul Gilbert Meyer, Bruce James Mathewson, Phanindra Kumar Mannava
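One way to picture a requesting node varying its operation in response to the source indication is sketched below in Python. The specific policy (adapting a prefetch distance to the fulfilling node's latency class) and all names are assumptions, not taken from the patent.

```python
# Assumed latency classes for the node that fulfilled the access.
LATENCY_CLASS = {"local_cache": 1, "remote_cache": 3, "dram": 10}

class Requester:
    def __init__(self):
        self.prefetch_distance = 1

    def on_response(self, data, source_indication):
        # Adapt behaviour: the further away the fulfilling node,
        # the further ahead this requester prefetches next time.
        self.prefetch_distance = LATENCY_CLASS.get(source_indication, 1)
        return data

r = Requester()
r.on_response(b"\x00" * 64, "dram")
print(r.prefetch_distance)   # 10
```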
-
Patent number: 11263138
Abstract: An apparatus is provided that includes cache circuitry that comprises a plurality of cache lines. The cache circuitry treats one or more of the cache lines as trace lines, each having correlated addresses and each being tagged by a trigger address. Prefetch circuitry causes data at the correlated addresses stored in the trace lines to be prefetched.
Type: Grant
Filed: October 31, 2018
Date of Patent: March 1, 2022
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Miles Robert Dooley, Michael Filippo
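The trace-line idea can be sketched as a small Python model: lines tagged by a trigger address hold correlated addresses, and an access to the trigger prefetches them. The class and method names, and the callback interface, are illustrative assumptions.

```python
class TraceLinePrefetcher:
    def __init__(self):
        self.trace_lines = {}      # trigger address -> correlated addresses

    def record(self, trigger, correlated):
        self.trace_lines[trigger] = list(correlated)

    def on_access(self, address, prefetch):
        """If the accessed address is a trigger address, prefetch the
        correlated addresses via the supplied prefetch callback."""
        for addr in self.trace_lines.get(address, []):
            prefetch(addr)

pf = TraceLinePrefetcher()
pf.record(trigger=0x1000, correlated=[0x2000, 0x2040, 0x3000])
pf.on_access(0x1000, prefetch=lambda a: print(f"prefetch {a:#x}"))
```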
-
Patent number: 11256623
Abstract: Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item is retrieved and brought into cache. Since the target device can therefore decide whether to respond to the trigger or not, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
Type: Grant
Filed: February 8, 2017
Date of Patent: February 22, 2022
Assignee: Arm Limited
Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce, Michael Filippo, Paul Gilbert Meyer, Alex James Waugh, Geoffray Matthieu Lacourba
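The key point, that the hub forwards a trigger rather than pushing data unsolicited and the target decides whether to act on it, is sketched below in Python. The message and class names are hypothetical and stand in for the coherency protocol transactions named in the abstract.

```python
class TargetDevice:
    def __init__(self, accept):
        self.accept = accept
        self.cache = set()

    def on_prepopulate_trigger(self, data_item, fetch):
        # The target is free to ignore the trigger entirely.
        if self.accept:
            self.cache.add(fetch(data_item))

class HubDevice:
    def handle_prepopulate_request(self, data_item, target, memory):
        # Forward a trigger naming the data item; never push data unsolicited.
        target.on_prepopulate_trigger(data_item, fetch=lambda d: memory[d])

memory = {"lineA": 0xDEAD}
target = TargetDevice(accept=True)
HubDevice().handle_prepopulate_request("lineA", target, memory)
print(target.cache)   # {57005}: the target chose to pre-cache the item
```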
-
Patent number: 11237974
Abstract: A data processing apparatus is provided. The data processing apparatus includes fetch circuitry to fetch instructions from storage circuitry. Decode circuitry decodes each of the instructions into one or more operations and provides the one or more operations to one or more execution units. The decode circuitry is adapted to decode at least one of the instructions into a plurality of operations. Cache circuitry caches the one or more operations, and at least one entry of the cache circuitry is a compressed entry that represents the plurality of operations.
Type: Grant
Filed: August 27, 2019
Date of Patent: February 1, 2022
Assignee: Arm Limited
Inventors: Michael Brian Schinzler, Michael Filippo
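A minimal Python sketch of an operation cache in which an instruction decoded into several operations occupies a single compressed entry follows; the encoding (a tuple of micro-op names) is purely illustrative and not the patented compression scheme.

```python
class OpCache:
    def __init__(self):
        self.entries = {}                # instruction address -> cache entry

    def fill(self, address, micro_ops):
        if len(micro_ops) > 1:
            # Compressed entry: one slot represents the whole plurality of ops.
            self.entries[address] = ("compressed", tuple(micro_ops))
        else:
            self.entries[address] = ("plain", micro_ops[0])

    def lookup(self, address):
        kind, payload = self.entries[address]
        return list(payload) if kind == "compressed" else [payload]

cache = OpCache()
cache.fill(0x40, ["uop_ld", "uop_add"])   # one instruction, two operations
print(cache.lookup(0x40))                  # ['uop_ld', 'uop_add']
```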
-
Patent number: 11200177
Abstract: A data processing system (2) incorporates a first exclusive cache memory (8, 10) and a second exclusive cache memory (14). A snoop filter (18) located together with the second exclusive cache memory on one side of the communication interface (12) serves to track entries within the first exclusive cache memory. The snoop filter includes retention data storage circuitry to store retention data for controlling retention of cache entries within the second exclusive cache memory. Retention data transfer circuitry (20) serves to transfer the retention data to and from the retention data storage circuitry within the snoop filter and the second cache memory as the cache entries concerned are transferred between the second exclusive cache memory and the first exclusive cache memory.
Type: Grant
Filed: October 19, 2016
Date of Patent: December 14, 2021
Assignee: Arm Limited
Inventors: Alex James Waugh, Dimitrios Kaseridis, Klas Magnus Bruce, Michael Filippo, Joseph Michael Pusdesris, Jamshed Jalal
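The effect described, parking retention metadata in the snoop filter while an entry lives in the first exclusive cache and restoring it when the entry returns, can be pictured with this hypothetical Python sketch; the representation of retention data as a small integer is an assumption.

```python
class SnoopFilter:
    def __init__(self):
        self.tracked = {}       # address -> retention data (e.g. reuse bits)

    def on_move_to_first_cache(self, address, retention):
        # Entry leaves the second exclusive cache: stash its retention data.
        self.tracked[address] = retention

    def on_return_to_second_cache(self, address):
        # Entry comes back: restore the retention data so the replacement
        # policy keeps the entry's history across the round trip.
        return self.tracked.pop(address, 0)

sf = SnoopFilter()
sf.on_move_to_first_cache(0x80, retention=3)
print(sf.on_return_to_second_cache(0x80))   # 3
```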
-
Patent number: 11003454
Abstract: Apparatuses for data processing and methods of data processing are provided. A data processing apparatus performs data processing operations in response to a sequence of instructions, including performing speculative execution of at least some of the sequence of instructions. In response to a branch instruction, the data processing apparatus predicts whether the branch is taken or not taken, and further speculative instruction execution is based on that prediction. A path speculation cost is calculated in dependence on a number of recently flushed instructions, and a rate at which speculatively executed instructions are issued may be modified based on the path speculation cost.
Type: Grant
Filed: July 17, 2019
Date of Patent: May 11, 2021
Assignee: Arm Limited
Inventors: Michael Brian Schinzler, Michael Filippo, Yasuo Ishii
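A toy Python example of the idea, deriving a path speculation cost from the count of recently flushed instructions and using it to throttle the speculative issue rate, is shown below; the linear scaling and the window size are assumptions, not the patented formula.

```python
def speculative_issue_rate(max_rate, recently_flushed, window=256):
    """Return how many speculative instructions to issue per cycle:
    the more instructions were flushed recently, the lower the rate."""
    cost = min(recently_flushed / window, 1.0)     # path speculation cost
    return max(1, int(max_rate * (1.0 - cost)))

print(speculative_issue_rate(max_rate=8, recently_flushed=0))     # 8
print(speculative_issue_rate(max_rate=8, recently_flushed=192))   # 2
```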
-
Patent number: 10983916
Abstract: A data processing apparatus is provided that includes a plurality of storage elements. Receiving circuitry receives a plurality of incoming data beats from cache circuitry and stores the incoming data beats in the storage elements. At least one existing data beat in the storage elements is replaced by an equal number of the incoming data beats belonging to a different cache line of the cache circuitry. The existing data beats stored in said plurality of storage elements form an incomplete cache line.
Type: Grant
Filed: March 1, 2017
Date of Patent: April 20, 2021
Assignee: Arm Limited
Inventors: Huzefa Moiz Sanjeliwala, Klas Magnus Bruce, Leigang Kou, Michael Filippo, Miles Robert Dooley, Matthew Andrew Rafacz
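Below is a hypothetical Python sketch of such a beat buffer: a fixed set of storage elements holds the beats of a possibly incomplete cache line, and incoming beats from a different line overwrite an equal number of them. The buffer size and tuple layout are assumptions.

```python
class BeatBuffer:
    def __init__(self, num_elements=4):
        self.slots = [None] * num_elements           # storage elements

    def receive(self, line_id, beats):
        """Store incoming beats, overwriting the same number of existing
        beats, which may belong to a different, incomplete cache line."""
        for i, beat in enumerate(beats[:len(self.slots)]):
            self.slots[i] = (line_id, beat)

buf = BeatBuffer()
buf.receive("lineA", beats=[b"a0", b"a1"])   # incomplete line A occupies two slots
buf.receive("lineB", beats=[b"b0", b"b1"])   # two beats of line B replace them
print(buf.slots)
```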
-
Publication number: 20210064533
Abstract: A data processing apparatus is provided. The data processing apparatus includes fetch circuitry to fetch instructions from storage circuitry. Decode circuitry decodes each of the instructions into one or more operations and provides the one or more operations to one or more execution units. The decode circuitry is adapted to decode at least one of the instructions into a plurality of operations. Cache circuitry caches the one or more operations, and at least one entry of the cache circuitry is a compressed entry that represents the plurality of operations.
Type: Application
Filed: August 27, 2019
Publication date: March 4, 2021
Inventors: Michael Brian Schinzler, Michael Filippo
-
Publication number: 20210064377
Abstract: A data processing apparatus is provided that comprises fetch circuitry to fetch an instruction stream comprising a plurality of instructions, including a status updating instruction, from storage circuitry. Status storage circuitry stores a status value. Execution circuitry executes the instructions, wherein at least some of the instructions are executed in an order other than in the instruction stream. For the status updating instruction, the execution circuitry is adapted to update the status value based on execution of the status updating instruction. Flush circuitry flushes, when the status storage circuitry is updated, flushed instructions that appear after the status updating instruction in the instruction stream.
Type: Application
Filed: August 26, 2019
Publication date: March 4, 2021
Inventors: Abhishek Raja, Rakesh Shaji Lal, Michael Filippo, Glen Andrew Harris, Vasu Kudaravalli, Huzefa Moiz Sanjeliwala, Jason Setter
-
Publication number: 20210055962
Abstract: An apparatus and method are provided for operating an issue queue. The issue queue has a first section and a second section, where each of those sections comprises a number of entries, and where each entry is employed to store operation information identifying an operation to be performed by a processing unit. Allocation circuitry determines, for each item of received operation information, whether to allocate that operation information to an entry in the first section or to an entry in the second section. The operation information identifies not only the associated operation, but also each source operand required by the associated operation and availability of each source operand. Selection circuitry selects from the issue queue, during a given selection iteration, an operation to be issued to the processing unit, and selects that operation from amongst the operations whose required source operands are available.
Type: Application
Filed: August 21, 2019
Publication date: February 25, 2021
Inventors: Michael David Achenbach, Robert Greg McDonald, Nicholas Andrew Pfister, Kelvin Domnic Goveas, Michael Filippo, Abhishek Raja, Zachary Allen Kingsbury
-
Publication number: 20210026627
Abstract: Circuitry comprises an instruction decoder to decode a gather load instruction having a vector operand comprising a plurality of vector entries, in which each vector entry defines, at least in part, a respective address from which data is to be loaded; the instruction decoder being configured to generate a set of load operations relating to respective individual addresses in dependence upon the vector operand, each of the set of load operations having a respective identifier which is unique with respect to other load operations in the set, and control circuitry to maintain a data item for the gather load instruction, the data item including a count value representing a number of load operations in the set of load operations awaiting issue for execution; and execution circuitry to execute the set of load operations; the control circuitry being configured, in response to a detection from the count value of the data item associated with a given gather load instruction that the set of load operations generated fo…
Type: Application
Filed: July 25, 2019
Publication date: January 28, 2021
Inventors: Abhishek Raja, Michael Filippo, Huzefa Moiz Sanjeliwala, Kelvin Domnic Goveas
-
Publication number: 20210019150
Abstract: Apparatuses for data processing and methods of data processing are provided. A data processing apparatus performs data processing operations in response to a sequence of instructions, including performing speculative execution of at least some of the sequence of instructions. In response to a branch instruction, the data processing apparatus predicts whether the branch is taken or not taken, and further speculative instruction execution is based on that prediction. A path speculation cost is calculated in dependence on a number of recently flushed instructions, and a rate at which speculatively executed instructions are issued may be modified based on the path speculation cost.
Type: Application
Filed: July 17, 2019
Publication date: January 21, 2021
Inventors: Michael Brian Schinzler, Michael Filippo, Yasuo Ishii
-
Patent number: 10817298
Abstract: An apparatus comprises a branch target buffer (BTB) to store predicted target addresses of branch instructions. In response to a fetch block address identifying a fetch block comprising two or more program instructions, the BTB performs a lookup to identify whether it stores one or more predicted target addresses for one or more branch instructions in the fetch block. When the BTB is identified in the lookup as storing predicted target addresses for more than one branch instruction in said fetch block, branch target selecting circuitry selects a next fetch block address from among the multiple predicted target addresses returned in the lookup. A shortcut path bypassing the branch target selecting circuitry is provided to forward a predicted target address identified in the lookup as the next fetch block address when a predetermined condition is satisfied.
Type: Grant
Filed: October 27, 2016
Date of Patent: October 27, 2020
Assignee: Arm Limited
Inventors: Yasuo Ishii, Michael Filippo, Muhammad Umar Farooq
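A rough Python sketch of the lookup and shortcut path follows. Treating "exactly one predicted target in the fetch block" as the predetermined condition is an assumption for illustration; the patent does not spell out the condition here, and the sequential fetch step size is likewise invented.

```python
def next_fetch_block(btb, fetch_block_addr, select):
    targets = btb.get(fetch_block_addr, [])
    if len(targets) == 1:
        return targets[0]             # shortcut path: bypass selection circuitry
    if targets:
        return select(targets)        # full selection among multiple targets
    return fetch_block_addr + 0x10    # no predicted branch: sequential fetch

btb = {0x1000: [0x2000], 0x2000: [0x3000, 0x4000]}
print(hex(next_fetch_block(btb, 0x1000, select=min)))   # shortcut -> 0x2000
print(hex(next_fetch_block(btb, 0x2000, select=min)))   # selected -> 0x3000
```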
-
Patent number: 10761987
Abstract: An apparatus and method are provided for processing ownership upgrade requests in relation to cached data. The apparatus has a plurality of processing units, at least some of which have associated cache storage. A coherent interconnect couples the plurality of master units with memory, the coherent interconnect having a snoop unit used to implement a cache coherency protocol when a request received by the coherent interconnect identifies a cacheable memory address within the memory. Contention management circuitry is provided to control contended access to a memory address by two or more processing units within the plurality of processing units.
Type: Grant
Filed: November 28, 2018
Date of Patent: September 1, 2020
Assignee: Arm Limited
Inventors: Jamshed Jalal, Mark David Werkheiser, Michael Filippo, Klas Magnus Bruce, Paul Gilbert Meyer
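One simple way to picture contention management for ownership upgrades is sketched below in Python: upgrade requests for the same address are granted to one processing unit at a time and the others are told to retry. The grant/retry policy and all names are assumptions, not the patented mechanism.

```python
class ContentionManager:
    def __init__(self):
        self.owner = {}                      # address -> processing unit id

    def request_upgrade(self, address, unit_id):
        current = self.owner.get(address)
        if current is None or current == unit_id:
            self.owner[address] = unit_id    # grant exclusive ownership
            return "granted"
        return "retry"                       # another unit holds the line

cm = ContentionManager()
print(cm.request_upgrade(0x200, unit_id=0))  # granted
print(cm.request_upgrade(0x200, unit_id=1))  # retry
```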
-
Patent number: 10754687
Abstract: There is provided a data processing apparatus that includes processing circuitry for executing a plurality of instructions. Storage circuitry stores a plurality of entries, each entry relating to an instruction in the plurality of instructions and including a dependency field. The dependency field stores a data dependency of that instruction on a previous instruction in the plurality of instructions. Scheduling circuitry schedules the execution of the plurality of instructions in an order that depends on each data dependency. When the previous instruction is a single-cycle instruction, the dependency field includes a reference to one of the entries that relates to the previous instruction; otherwise, the data dependency field includes an indication of an output destination of the previous instruction.
Type: Grant
Filed: June 12, 2018
Date of Patent: August 25, 2020
Assignee: Arm Limited
Inventors: Abhishek Raja, Chris Abernathy, Michael Filippo
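The two dependency-field encodings the abstract distinguishes can be illustrated with the Python sketch below; the dictionary encodings and the function name are hypothetical.

```python
def make_dependency_field(producer_entry_index, producer):
    """Single-cycle producer: record a direct reference to its entry.
    Otherwise: record the producer's output destination (e.g. a register)."""
    if producer["latency"] == 1:
        return {"kind": "entry_ref", "entry": producer_entry_index}
    return {"kind": "destination", "dest": producer["dst"]}

producer_add = {"op": "add", "dst": "r3", "latency": 1}
producer_mul = {"op": "mul", "dst": "r5", "latency": 4}
print(make_dependency_field(7, producer_add))   # entry reference
print(make_dependency_field(9, producer_mul))   # destination register
```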
-
Patent number: 10713187
Abstract: A memory controller comprises memory access circuitry configured to initiate a data access of data stored in a memory in response to a data access hint message received from another node in data communication with the memory controller; to access data stored in the memory in response to a data access request received from another node in data communication with the memory controller; and to provide the accessed data as a data access response to the data access request.
Type: Grant
Filed: July 25, 2019
Date of Patent: July 14, 2020
Assignee: Arm Limited
Inventors: Michael Filippo, Jamshed Jalal, Klas Magnus Bruce, Paul Gilbert Meyer, David Joseph Hawkins, Phanindra Kumar Mannava, Joseph Michael Pusdesris
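The hint/request split can be pictured with this hypothetical Python sketch: the hint only starts the memory access early, while the later request returns the data and benefits from the head start. The class interface and the in-flight buffer are assumptions made for the example.

```python
class MemoryController:
    def __init__(self, memory):
        self.memory = memory
        self.in_flight = {}            # address -> value fetched early

    def on_hint(self, address):
        # A hint initiates the access; no response is sent for a hint.
        self.in_flight[address] = self.memory[address]

    def on_request(self, address):
        # Serve from the already-initiated access if a hint arrived first.
        if address in self.in_flight:
            return self.in_flight.pop(address)
        return self.memory[address]

mc = MemoryController(memory={0x40: 123})
mc.on_hint(0x40)
print(mc.on_request(0x40))   # 123, with reduced effective latency
```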