Patents by Inventor Alexander Alfred Hornung
Alexander Alfred Hornung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240386094
Abstract: An apparatus has processing circuitry to execute instructions and address prediction storage circuitry to store address prediction information for use in predicting upcoming instructions to be executed by the processing circuitry. The processing circuitry is responsive to an instruction to generate a pointer signature for a pointer to generate the pointer signature for the pointer based on an address of the pointer and a cryptographic key. The address prediction storage circuitry is also configured to store address prediction information for the pointer, the address prediction information including the pointer. The processing circuitry is responsive to an instruction to authenticate a given pointer to obtain, based on the address prediction information for the given pointer, a predicted pointer signature; compare the predicted pointer signature with a pointer signature identified by the instruction to authenticate; and responsive to the comparing detecting a match, determine that the given pointer is valid.
Type: Application
Filed: July 7, 2022
Publication date: November 21, 2024
Applicant: Arm Limited
Inventors: Alexander Alfred Hornung, Ian Michael Caulfield
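The abstract above describes generating a signature from a pointer's address and a key, keeping the pointer in address prediction storage, and later authenticating a presented signature against one recomputed from the predicted pointer. Below is a minimal software sketch of that flow; the hash function, the table layout and every identifier are illustrative assumptions, not the patented circuitry.

```cpp
// Minimal model of signature generation and prediction-assisted
// authentication. The hash, the table layout and every name here are
// illustrative assumptions, not the patented circuitry.
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Toy signature: mixes the pointer's address bits with a key (a stand-in
// for the cryptographic function the abstract refers to).
uint64_t sign(uint64_t pointer, uint64_t key) {
    uint64_t x = pointer ^ key;
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
    return x >> 48;                                  // keep a 16-bit signature
}

int main() {
    const uint64_t key = 0x0123456789abcdefULL;
    // Address prediction storage: remembers the pointer value previously
    // seen for a given slot (here keyed by an arbitrary table index).
    std::unordered_map<uint64_t, uint64_t> prediction;

    uint64_t pointer   = 0x0000700000001234ULL;
    uint64_t signature = sign(pointer, key);         // "generate pointer signature"
    prediction[0x4000] = pointer;                    // store the pointer as prediction info

    // "Authenticate": recompute a predicted signature from the predicted
    // pointer and compare it with the signature presented.
    uint64_t predicted_sig = sign(prediction[0x4000], key);
    std::cout << (predicted_sig == signature ? "pointer valid\n" : "pointer rejected\n");
}
```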
-
Patent number: 12099447
Abstract: Prefetch circuitry generates, based on stream prefetch state information, prefetch requests for prefetching data to at least one cache. Cache control circuitry controls, based on cache policy information associated with cache entries in a given level of cache, at least one of cache entry replacement in the given level of cache, and allocation of data evicted from the given level of cache to a further level of cache. The stream prefetch state information specifies, for at least one stream of addresses, information representing an address access pattern for generating addresses to be specified by a corresponding series of prefetch requests. Cache policy information for at least one prefetched cache entry of the given level of cache (to which data is prefetched for a given stream of addresses) is set to a value dependent on at least one stream property associated with the given stream of addresses.
Type: Grant
Filed: October 13, 2022
Date of Patent: September 24, 2024
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Roberto Gattuso
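One way to read the abstract above: the same per-stream metadata that drives prefetch address generation also decides how the prefetched lines are treated by the cache replacement and eviction policy. The sketch below models that coupling in software under an assumed stream-length rule ("long streams are transient"); the structure names and the rule itself are illustrative, not taken from the patent.

```cpp
// Per-stream prefetch state drives both the prefetch addresses and a
// cache-policy hint attached to the prefetched lines. The Stream fields,
// the PolicyHint values and the "long stream => evict early" rule are
// assumptions made for this example.
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

struct Stream { uint64_t next; int64_t stride; unsigned expected_len; };

enum class PolicyHint { KeepInCache, EvictEarly };

// One conceivable stream property: expected length. Lines prefetched for a
// very long stream are marked transient so they are evicted early and not
// allocated into the further cache level.
PolicyHint hintFor(const Stream& s) {
    return s.expected_len > 64 ? PolicyHint::EvictEarly : PolicyHint::KeepInCache;
}

int main() {
    Stream s{0x1000, 64, 256};                        // detected address access pattern
    std::vector<std::pair<uint64_t, PolicyHint>> prefetched;

    for (int i = 0; i < 4; ++i) {                     // issue a few prefetch requests
        prefetched.push_back({s.next, hintFor(s)});
        s.next += s.stride;
    }
    for (auto& [addr, hint] : prefetched)
        std::cout << std::hex << addr
                  << (hint == PolicyHint::EvictEarly ? " -> evict early\n" : " -> keep\n");
}
```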
-
Patent number: 12050805
Abstract: An apparatus supports decoding and execution of a bulk memory instruction specifying a block size parameter. The apparatus comprises control circuitry to determine whether the block size corresponding to the block size parameter exceeds a predetermined threshold, and performs a micro-architectural control action to influence the handling of at least one bulk memory operation by memory operation processing circuitry. The micro-architectural control action varies depending on whether the block size exceeds the predetermined threshold, and further depending on the states of other components and operations within or coupled with the apparatus. The micro-architectural control action could include an alignment correction action, cache allocation control action, or processing circuitry selection action.
Type: Grant
Filed: July 28, 2022
Date of Patent: July 30, 2024
Assignee: Arm Limited
Inventors: Ian Michael Caulfield, Abhishek Raja, Alexander Alfred Hornung
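The core idea in the abstract above is a size-dependent decision about how a bulk memory operation (for example a large copy or set) is handled. The following sketch shows such a decision in isolation; the threshold value, the alignment check and the action names are assumptions chosen for the example.

```cpp
// Size-dependent handling of a bulk memory operation. The threshold, the
// alignment check and the action names are invented for illustration.
#include <cstdint>
#include <cstdio>

enum class Action { AllocateInCache, BypassCache, SplitAndAlign };

Action chooseAction(uint64_t block_size, uint64_t dst, uint64_t threshold) {
    if (block_size <= threshold)
        return Action::AllocateInCache;   // small copy: normal cache allocation
    if (dst % 64 != 0)
        return Action::SplitAndAlign;     // large and misaligned: correct alignment first
    return Action::BypassCache;           // large and aligned: avoid polluting the cache
}

int main() {
    // 0 = AllocateInCache, 1 = BypassCache, 2 = SplitAndAlign
    std::printf("%d\n", (int)chooseAction(128,     0x1000, 4096));  // small block
    std::printf("%d\n", (int)chooseAction(1 << 20, 0x1008, 4096));  // large, misaligned
    std::printf("%d\n", (int)chooseAction(1 << 20, 0x1000, 4096));  // large, aligned
}
```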
-
Publication number: 20240126697
Abstract: Prefetch circuitry generates, based on stream prefetch state information, prefetch requests for prefetching data to at least one cache. Cache control circuitry controls, based on cache policy information associated with cache entries in a given level of cache, at least one of cache entry replacement in the given level of cache, and allocation of data evicted from the given level of cache to a further level of cache. The stream prefetch state information specifies, for at least one stream of addresses, information representing an address access pattern for generating addresses to be specified by a corresponding series of prefetch requests. Cache policy information for at least one prefetched cache entry of the given level of cache (to which data is prefetched for a given stream of addresses) is set to a value dependent on at least one stream property associated with the given stream of addresses.
Type: Application
Filed: October 13, 2022
Publication date: April 18, 2024
Inventors: Alexander Alfred Hornung, Roberto Gattuso
-
Patent number: 11907130
Abstract: An apparatus comprising a cache comprising a plurality of cache entries, cache access circuitry responsive to a cache access request to perform, based on a target memory address associated with the cache access request, a cache lookup operation, tracking circuitry to track pending requests to modify cache entries of the cache, and prediction circuitry responsive to the cache access request to make a prediction of whether the pending requests tracked by the tracking circuitry include a pending request to modify a cache entry associated with the target memory address, wherein the cache access circuitry is responsive to the cache access request to determine, based on the prediction, whether to perform an additional lookup of the tracking circuitry. A method and a non-transitory computer-readable medium to store computer-readable code for fabrication of the apparatus are also provided.
Type: Grant
Filed: January 26, 2023
Date of Patent: February 20, 2024
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Kenny Ju Min Yeoh
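The abstract above describes avoiding an extra lookup of the pending-request tracker by first making a cheap prediction of whether the tracker could contain the target address. The abstract does not say what the predictor looks like, so the sketch below uses a Bloom-filter-style bitmap purely as one plausible stand-in.

```cpp
// Cheap prediction of whether the pending-modify tracker needs a lookup at
// all. The Bloom-style bit filter is one plausible predictor chosen for
// illustration; the abstract does not specify the mechanism.
#include <bitset>
#include <cstdint>
#include <iostream>
#include <unordered_set>

struct PendingModifyPredictor {
    std::bitset<256> filter;                  // small prediction structure
    std::unordered_set<uint64_t> tracker;     // precise (but slower to consult) tracking

    void addPending(uint64_t line) {
        tracker.insert(line);
        filter.set(line % 256);
    }
    // Prediction: if the filter bit is clear, no pending modification can
    // exist, so the additional tracker lookup is skipped entirely.
    bool needsExtraLookup(uint64_t line) const { return filter.test(line % 256); }
    bool reallyPending(uint64_t line)   const { return tracker.count(line) != 0; }
};

int main() {
    PendingModifyPredictor p;
    p.addPending(0x80);                       // a line with a pending modification

    for (uint64_t line : {0x80ULL, 0x40ULL}) {
        if (p.needsExtraLookup(line))
            std::cout << std::hex << line << ": check tracker -> "
                      << (p.reallyPending(line) ? "pending\n" : "false positive\n");
        else
            std::cout << std::hex << line << ": predicted no pending modification\n";
    }
}
```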
-
Publication number: 20240036760
Abstract: An apparatus supports decoding and execution of a bulk memory instruction specifying a block size parameter. The apparatus comprises control circuitry to determine whether the block size corresponding to the block size parameter exceeds a predetermined threshold, and performs a micro-architectural control action to influence the handling of at least one bulk memory operation by memory operation processing circuitry. The micro-architectural control action varies depending on whether the block size exceeds the predetermined threshold, and further depending on the states of other components and operations within or coupled with the apparatus. The micro-architectural control action could include an alignment correction action, cache allocation control action, or processing circuitry selection action.
Type: Application
Filed: July 28, 2022
Publication date: February 1, 2024
Inventors: Ian Michael Caulfield, Abhishek Raja, Alexander Alfred Hornung
-
Patent number: 11526443
Abstract: Requester circuitry 4 issues an access request specifying a target physical address (PA) and a target physical address space (PAS) identifier identifying a target PAS. Prior to a point of physical aliasing (PoPA), a pre-PoPA memory system component 24, 8 treats aliasing PAs from different PASs which actually correspond to the same memory system resource as if they correspond to different memory system resources. A post-PoPA memory system component 6 treats the aliasing PAs as referring to the same memory system resource. When the target PA and target PAS of a read-if-hit-pre-PoPA request hit in a pre-PoPA cache 24, a data response is returned to the requester circuitry 4. If the read-if-hit-pre-PoPA request misses in the pre-PoPA cache 24, a no-data response is returned. The read-if-hit-pre-PoPA request is safe to issue speculatively while waiting for security checks to be performed, improving performance.
Type: Grant
Filed: August 16, 2021
Date of Patent: December 13, 2022
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Andrew Brookfield Swaine
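A rough behavioural model of the read-if-hit-pre-PoPA request follows: the pre-PoPA cache is indexed by the pair (PAS identifier, physical address), a hit returns data, and a miss returns a no-data response rather than fetching, which is what makes the request safe to issue speculatively. The structure and type names are illustrative.

```cpp
// Behavioural model of a read-if-hit-pre-PoPA request: the pre-PoPA cache
// is indexed by (PAS id, physical address); a hit returns data, a miss
// returns a no-data response instead of fetching. Names are illustrative.
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <utility>

using Key = std::pair<unsigned, uint64_t>;            // (PAS identifier, physical address)

struct PrePoPACache {
    std::map<Key, uint32_t> lines;
    // Never allocates or fetches on a miss: safe to use speculatively while
    // security checks are still outstanding.
    std::optional<uint32_t> readIfHit(unsigned pas, uint64_t pa) const {
        auto it = lines.find(Key{pas, pa});
        if (it == lines.end()) return std::nullopt;    // "no-data" response
        return it->second;                             // data response
    }
};

int main() {
    PrePoPACache cache;
    cache.lines[Key{1, 0x1000}] = 0xdeadbeef;          // cached under PAS 1 only

    // The same PA under two PAS ids: before the PoPA they are treated as
    // different resources, so only the matching PAS hits.
    for (unsigned pas : {1u, 2u}) {
        auto r = cache.readIfHit(pas, 0x1000);
        std::cout << "PAS " << pas << ": " << (r ? "data response" : "no-data response") << "\n";
    }
}
```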
-
Patent number: 11481220
Abstract: An apparatus comprises instruction fetch circuitry to retrieve instructions from storage and branch target storage to store entries comprising source and target addresses for branch instructions. A confidence value is stored with each entry and when a current address matches a source address in an entry, and the confidence value exceeds a confidence threshold, instruction fetch circuitry retrieves a predicted next instruction from a target address in the entry. Branch confidence update circuitry increases the confidence value of the entry on receipt of a confirmation of the target address and decreases the confidence value on receipt of a non-confirmation of the target address. When the confidence value meets a confidence lock threshold below the confidence threshold and non-confirmation of the target address is received, a locking mechanism with respect to the entry is triggered. A corresponding method is also provided.
Type: Grant
Filed: October 26, 2016
Date of Patent: October 25, 2022
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Adrian Viorel Popescu
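The confidence handling described above can be illustrated in a few lines: an entry is used for prediction only above one threshold, and a non-confirmation arriving while the confidence sits at or below a lower lock threshold triggers a locking action. The particular threshold values and the lock behaviour in this sketch are assumptions.

```cpp
// Confidence handling for a branch target entry: predict only above one
// threshold, and lock the entry when a non-confirmation arrives while the
// confidence is at or below a lower lock threshold. Thresholds and the
// lock action are assumptions.
#include <cstdint>
#include <iostream>

struct BtbEntry {
    uint64_t source, target;
    int  confidence = 0;
    bool locked     = false;
};

constexpr int kPredictThreshold = 2;   // use for prediction only above this
constexpr int kLockThreshold    = 1;   // non-confirmation at/below this locks the entry

bool usableForPrediction(const BtbEntry& e) { return e.confidence > kPredictThreshold; }

void update(BtbEntry& e, bool target_confirmed) {
    if (target_confirmed) { ++e.confidence; return; }
    if (e.confidence <= kLockThreshold) e.locked = true;   // trigger the locking mechanism
    else --e.confidence;
}

int main() {
    BtbEntry e{0x400, 0x800};
    for (int i = 0; i < 3; ++i) update(e, true);            // confirmations build confidence
    std::cout << "usable for prediction? " << usableForPrediction(e) << "\n";
    for (int i = 0; i < 3; ++i) update(e, false);           // repeated non-confirmations
    std::cout << "entry locked? " << e.locked << "\n";
}
```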
-
Patent number: 11429529
Abstract: An apparatus comprises processing circuitry to issue demand memory access requests to access data stored in a memory system. Stride pattern detection circuitry detects whether a sequence of demand target addresses specified by the demand memory access requests includes two or more constant stride sequences of addresses interleaved within the sequence of demand target addresses. Each constant stride sequence comprises addresses separated by intervals of a constant stride value. Prefetch control circuitry controls issuing of prefetch load requests to prefetch data from the memory system. The prefetch load requests specify prefetch target addresses predicted based on the constant stride sequences detected by the stride pattern detection circuitry.
Type: Grant
Filed: November 21, 2019
Date of Patent: August 30, 2022
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Jose Gonzalez-Gonzalez, Gregory Andrew Chadwick
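The difficulty the abstract above addresses is that a single demand stream can hide several constant-stride sequences interleaved with one another. The sketch below shows one simple way to expose them, by correlating each new address against a short history and counting recurring deltas; it illustrates the problem rather than the patented detector, and the window size and repeat threshold are arbitrary.

```cpp
// Exposing constant strides hidden in an interleaved demand stream by
// correlating each new address against a short history and counting
// recurring deltas. Window size and repeat threshold are arbitrary; this
// illustrates the problem, not the patented detector.
#include <cstdint>
#include <deque>
#include <iostream>
#include <map>

struct StrideCorrelator {
    std::deque<uint64_t> history;         // recent demand target addresses
    std::map<int64_t, int> delta_count;   // how often each address delta recurs

    void observe(uint64_t addr) {
        for (uint64_t h : history) ++delta_count[(int64_t)(addr - h)];
        history.push_back(addr);
        if (history.size() > 8) history.pop_front();
    }
    // Deltas that keep recurring are treated as constant-stride sequences.
    void reportStrides() const {
        for (auto& [delta, count] : delta_count)
            if (count >= 3)
                std::cout << "detected constant stride 0x" << std::hex << delta << "\n";
    }
};

int main() {
    StrideCorrelator d;
    // Two interleaved sequences: one with stride 0x40, one with stride 0x100.
    for (uint64_t a : {0x1000, 0x8000, 0x1040, 0x8100, 0x1080, 0x8200, 0x10C0, 0x8300})
        d.observe(a);
    d.reportStrides();
}
```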
-
Publication number: 20220058121
Abstract: Requester circuitry 4 issues an access request specifying a target physical address (PA) and a target physical address space (PAS) identifier identifying a target PAS. Prior to a point of physical aliasing (PoPA), a pre-PoPA memory system component 24, 8 treats aliasing PAs from different PASs which actually correspond to the same memory system resource as if they correspond to different memory system resources. A post-PoPA memory system component 6 treats the aliasing PAs as referring to the same memory system resource. When the target PA and target PAS of a read-if-hit-pre-PoPA request hit in a pre-PoPA cache 24, a data response is returned to the requester circuitry 4. If the read-if-hit-pre-PoPA request misses in the pre-PoPA cache 24, a no-data response is returned. The read-if-hit-pre-PoPA request is safe to issue speculatively while waiting for security checks to be performed, improving performance.
Type: Application
Filed: August 16, 2021
Publication date: February 24, 2022
Inventors: Alexander Alfred Hornung, Andrew Brookfield Swaine
-
Patent number: 11204771
Abstract: Aspects of the present disclosure relate to an apparatus comprising decode circuitry to receive an instruction and identify the received instruction as a load instruction, and prediction circuitry to predict, based on a prediction scheme, a target address of the load instruction, and trigger a speculative memory access in respect of the predicted target address.
Type: Grant
Filed: October 24, 2019
Date of Patent: December 21, 2021
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Jose Gonzalez-Gonzalez
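The abstract above describes predicting a load's target address as early as decode and launching a speculative access to it. A per-PC last-address-plus-stride table, as sketched below, is just one plausible prediction scheme used here for illustration; the abstract only names "a prediction scheme" in general.

```cpp
// Predicting a load's target address at decode time and launching a
// speculative access before the real address is computed. The per-PC
// last-address-plus-stride table is one plausible prediction scheme,
// used here purely for illustration.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct LoadAddressPredictor {
    struct Entry { uint64_t last_addr = 0; int64_t stride = 0; };
    std::unordered_map<uint64_t, Entry> table;          // keyed by load instruction PC

    // At decode: produce a predicted target address if we have history.
    bool predict(uint64_t pc, uint64_t& predicted) const {
        auto it = table.find(pc);
        if (it == table.end()) return false;
        predicted = it->second.last_addr + it->second.stride;
        return true;
    }
    // When the load executes and the true address is known: train.
    void train(uint64_t pc, uint64_t actual) {
        Entry& e = table[pc];
        e.stride = (int64_t)(actual - e.last_addr);
        e.last_addr = actual;
    }
};

int main() {
    LoadAddressPredictor p;
    const uint64_t pc = 0x400100;
    for (uint64_t addr : {0x20000, 0x20008, 0x20010}) p.train(pc, addr);

    uint64_t predicted;
    if (p.predict(pc, predicted))
        std::cout << "decode: trigger speculative access to 0x" << std::hex << predicted << "\n";
}
```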
-
Publication number: 20210157730
Abstract: An apparatus comprises processing circuitry to issue demand memory access requests to access data stored in a memory system. Stride pattern detection circuitry detects whether a sequence of demand target addresses specified by the demand memory access requests includes two or more constant stride sequences of addresses interleaved within the sequence of demand target addresses. Each constant stride sequence comprises addresses separated by intervals of a constant stride value. Prefetch control circuitry controls issuing of prefetch load requests to prefetch data from the memory system. The prefetch load requests specify prefetch target addresses predicted based on the constant stride sequences detected by the stride pattern detection circuitry.
Type: Application
Filed: November 21, 2019
Publication date: May 27, 2021
Inventors: Alexander Alfred Hornung, Jose Gonzalez-Gonzalez, Gregory Andrew Chadwick
-
Publication number: 20210124587
Abstract: Aspects of the present disclosure relate to an apparatus comprising decode circuitry to receive an instruction and identify the received instruction as a load instruction, and prediction circuitry to predict, based on a prediction scheme, a target address of the load instruction, and trigger a speculative memory access in respect of the predicted target address.
Type: Application
Filed: October 24, 2019
Publication date: April 29, 2021
Inventors: Alexander Alfred Hornung, Jose Gonzalez-Gonzalez
-
Publication number: 20210103493
Abstract: A requester issues a request specifying a target address indicating an addressed location in a memory system. A completer responds to the request. Tag error checking circuitry performs a tag error checking operation when the request issued by the requester is a tag-error-checking request specifying an address tag. The tag error checking operation comprises determining whether the address tag matches an allocation tag stored in the memory system associated with a block of one or more addresses comprising the target address specified by the tag-error-checking request. The requester and the completer communicate via a memory interface having at least one data signal path to exchange read data or write data between the requester and the completer; and at least one tag signal path, provided in parallel with the at least one data signal path, to exchange address tags or allocation tags between the requester and the completer.
Type: Application
Filed: October 7, 2019
Publication date: April 8, 2021
Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Michael Andrew Campbell, Alexander Alfred Hornung, Alex James Waugh, Klas Magnus Bruce, Richard Roy Grisenthwaite
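The tag error checking operation itself is easy to model: each small granule of memory carries an allocation tag, a checked request carries an address tag, and the two must match. The sketch below assumes 4-bit tags and 16-byte granules, which mirror common memory-tagging schemes but are not taken from this abstract, and it does not model the parallel tag signal path of the interface.

```cpp
// The tag check itself: each 16-byte granule of memory carries an
// allocation tag, a checked request carries an address tag, and the two
// must match. 4-bit tags and 16-byte granules are assumptions here.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct TaggedMemory {
    std::unordered_map<uint64_t, uint8_t> allocation_tags;   // per 16-byte granule

    void setAllocationTag(uint64_t addr, uint8_t tag) {
        allocation_tags[addr >> 4] = tag & 0xF;
    }
    // Tag-error-checking request: compare the address tag against the
    // allocation tag of the granule containing the target address.
    bool tagCheckPasses(uint64_t addr, uint8_t address_tag) const {
        auto it = allocation_tags.find(addr >> 4);
        uint8_t alloc = (it == allocation_tags.end()) ? 0 : it->second;
        return (address_tag & 0xF) == alloc;
    }
};

int main() {
    TaggedMemory mem;
    mem.setAllocationTag(0x1000, 0x7);                        // tag the allocation

    std::cout << "matching tag:  " << mem.tagCheckPasses(0x1008, 0x7) << "\n";  // passes
    std::cout << "stale pointer: " << mem.tagCheckPasses(0x1008, 0x3) << "\n";  // tag error
}
```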
-
Patent number: 10949292
Abstract: A requester issues a request specifying a target address indicating an addressed location in a memory system. A completer responds to the request. Tag error checking circuitry performs a tag error checking operation when the request issued by the requester is a tag-error-checking request specifying an address tag. The tag error checking operation comprises determining whether the address tag matches an allocation tag stored in the memory system associated with a block of one or more addresses comprising the target address specified by the tag-error-checking request. The requester and the completer communicate via a memory interface having at least one data signal path to exchange read data or write data between the requester and the completer; and at least one tag signal path, provided in parallel with the at least one data signal path, to exchange address tags or allocation tags between the requester and the completer.
Type: Grant
Filed: October 7, 2019
Date of Patent: March 16, 2021
Assignee: Arm Limited
Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Michael Andrew Campbell, Alexander Alfred Hornung, Alex James Waugh, Klas Magnus Bruce, Richard Roy Grisenthwaite
-
Publication number: 20200250098
Abstract: An apparatus comprises a cache memory to store data as a plurality of cache lines each having a data size and an associated physical address in a memory, access circuitry to access the data stored in the cache memory, detection circuitry to detect, for at least a set of sub-units of the cache lines stored in the cache memory, whether a number of accesses by the access circuitry to a given sub-unit exceeds a predetermined threshold, in which each sub-unit has a data size that is smaller than the data size of a cache line, prediction circuitry to generate a prediction, for a given region of a plurality of regions of physical address space, of whether data stored in that region comprises streaming data in which each of one or more portions of the given cache line is predicted to be subject to a maximum of one read operation or multiple access data in which each of the one or more portions of the given cache line is predicted to be subject to more than one read operation, the prediction circuitry being configured …
Type: Application
Filed: February 5, 2019
Publication date: August 6, 2020
Inventors: Lei Ma, Alexander Alfred Hornung, Ian Michael Caulfield
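The prediction described above hinges on counting accesses to sub-units of cache lines and classifying whole regions of physical address space as streaming (each portion read at most once) or multiple-access. The sketch below implements that classification with assumed region and sub-unit sizes and a simple "any reuse seen" rule; none of these parameters come from the abstract.

```cpp
// Counting reads to sub-units of cache lines and classifying regions of
// physical address space as streaming (each portion read at most once) or
// multiple-access. Region size, sub-unit size and the reuse rule are
// assumptions made for this sketch.
#include <cstdint>
#include <iostream>
#include <map>

constexpr uint64_t kRegionShift  = 12;   // 4 KiB regions (assumed)
constexpr uint64_t kSubUnitShift = 4;    // 16-byte sub-units of a cache line (assumed)

struct RegionPredictor {
    std::map<uint64_t, int> subunit_reads;         // reads per sub-unit
    std::map<uint64_t, int> region_reuse_events;   // sub-units read more than once, per region

    void onRead(uint64_t addr) {
        if (++subunit_reads[addr >> kSubUnitShift] == 2)   // crossed the one-read threshold
            ++region_reuse_events[addr >> kRegionShift];
    }
    bool predictStreaming(uint64_t addr) const {
        auto it = region_reuse_events.find(addr >> kRegionShift);
        return it == region_reuse_events.end() || it->second == 0;
    }
};

int main() {
    RegionPredictor p;
    for (uint64_t a = 0x10000; a < 0x10100; a += 16) p.onRead(a);   // each sub-unit read once
    for (int i = 0; i < 3; ++i) p.onRead(0x20000);                  // repeated reads: reuse

    std::cout << "region at 0x10000 streaming? " << p.predictStreaming(0x10000) << "\n";
    std::cout << "region at 0x20000 streaming? " << p.predictStreaming(0x20000) << "\n";
}
```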
-
Patent number: 10725923
Abstract: An apparatus comprises a cache memory to store data as a plurality of cache lines each having a data size and an associated physical address in a memory, access circuitry to access the data stored in the cache memory, detection circuitry to detect, for at least a set of sub-units of the cache lines stored in the cache memory, whether a number of accesses by the access circuitry to a given sub-unit exceeds a predetermined threshold, in which each sub-unit has a data size that is smaller than the data size of a cache line, prediction circuitry to generate a prediction, for a given region of a plurality of regions of physical address space, of whether data stored in that region comprises streaming data in which each of one or more portions of the given cache line is predicted to be subject to a maximum of one read operation or multiple access data in which each of the one or more portions of the given cache line is predicted to be subject to more than one read operation, the prediction circuitry being configured …
Type: Grant
Filed: February 5, 2019
Date of Patent: July 28, 2020
Assignee: Arm Limited
Inventors: Lei Ma, Alexander Alfred Hornung, Ian Michael Caulfield
-
Patent number: 10564973
Abstract: A processor fetches instructions from a plurality of threads. Each entry in a branch information storage (BIS) stores a virtual address ID for a branch, information about the branch, and thread ID information. The BIS is accessed using a virtual address of an instruction to be fetched for a thread to determine whether a hit exists, and if so, to obtain the branch information stored in the entry that gave rise to the hit. The virtual address is converted into a physical address, and an address translation regime is specified for each thread. When allocating an entry into the BIS, allocation circuitry determines, for a branch instruction for a current thread, whether the address translation regime is shared by plural threads. If so, the allocation circuitry identifies both the current thread and any other thread for which the address translation regime is shared.
Type: Grant
Filed: November 20, 2015
Date of Patent: February 18, 2020
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Ian Michael Caulfield
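A compact way to picture the allocation behaviour above: each branch-information entry carries a thread mask, a thread's lookup only hits if its bit is set, and allocation sets the bits of every thread whose address translation regime matches the allocating thread's. The two-regime setup and the mask encoding below are illustrative assumptions.

```cpp
// Thread-aware branch-information entries: each entry carries a thread
// mask, a lookup hits only for threads whose bit is set, and allocation
// sets the bits of every thread sharing the allocating thread's address
// translation regime. The regime ids and mask encoding are illustrative.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct BisEntry { uint64_t target; uint8_t thread_mask; };

struct BranchInfoStorage {
    std::unordered_map<uint64_t, BisEntry> entries;   // keyed by branch virtual address

    void allocate(uint64_t va, uint64_t target, unsigned thread,
                  const std::vector<int>& regime_of_thread) {
        uint8_t mask = 0;
        for (unsigned t = 0; t < regime_of_thread.size(); ++t)
            if (regime_of_thread[t] == regime_of_thread[thread])
                mask |= (uint8_t)(1u << t);           // shared regime -> shared entry
        entries[va] = {target, mask};
    }
    bool lookup(uint64_t va, unsigned thread, uint64_t& target) const {
        auto it = entries.find(va);
        if (it == entries.end() || !(it->second.thread_mask & (1u << thread))) return false;
        target = it->second.target;
        return true;
    }
};

int main() {
    std::vector<int> regime = {7, 7, 9};   // threads 0 and 1 share a regime; thread 2 does not
    BranchInfoStorage bis;
    bis.allocate(0x400, 0x800, 0, regime); // entry allocated by thread 0

    uint64_t target;
    std::cout << "thread 1 hit? " << bis.lookup(0x400, 1, target) << "\n";   // shared -> hit
    std::cout << "thread 2 hit? " << bis.lookup(0x400, 2, target) << "\n";   // other regime -> miss
}
```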
-
Patent number: 10503512
Abstract: Apparatus for data processing and a method of data processing are provided, according to which the processing circuitry of the apparatus can access a memory system and execute data processing instructions in one context of multiple contexts which it supports. When the processing circuitry executes a barrier instruction, the resulting access ordering constraint may be limited to being enforced for accesses which have been initiated by the processing circuitry when operating in an identified context, which may for example be the context in which the barrier instruction has been executed. This provides a separation between the operation of the processing circuitry in its multiple possible contexts and in particular avoids delays in the completion of the access ordering constraint, for example relating to accesses to high latency regions of memory, from affecting the timing sensitivities of other contexts.
Type: Grant
Filed: November 3, 2015
Date of Patent: December 10, 2019
Assignee: Arm Limited
Inventors: Simon John Craske, Alexander Alfred Hornung, Max John Batley, Kauser Yakub Johar
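As a loose software analogy for the context-limited ordering constraint, outstanding accesses can be tagged with the context that issued them, and a barrier executed in one context only has to wait for that context's accesses to drain. The sketch below models only this observable behaviour, not the memory system itself; the context ids and data layout are invented.

```cpp
// Software analogy only: outstanding accesses are tagged with the context
// that issued them, and a barrier executed in one context waits only for
// that context's accesses. Context ids and the data layout are invented.
#include <cstdint>
#include <iostream>
#include <vector>

struct OutstandingAccess { unsigned context; uint64_t addr; bool done; };

// The ordering constraint is enforced only for accesses initiated by
// barrier_context; slow accesses from other contexts do not hold it up.
bool barrierComplete(const std::vector<OutstandingAccess>& pending, unsigned barrier_context) {
    for (const auto& a : pending)
        if (a.context == barrier_context && !a.done) return false;
    return true;
}

int main() {
    std::vector<OutstandingAccess> pending = {
        {0, 0x1000, true},     // context 0: completed
        {0, 0x2000, true},     // context 0: completed
        {1, 0x9000, false},    // context 1: still waiting on high-latency memory
    };
    std::cout << "barrier in context 0 complete? " << barrierComplete(pending, 0) << "\n";
    std::cout << "barrier in context 1 complete? " << barrierComplete(pending, 1) << "\n";
}
```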
-
Publication number: 20170153894
Abstract: An apparatus which produces branch predictions and a method of operating such an apparatus are provided. A branch target storage used to store entries comprising indications of branch instruction source addresses and indications of branch instruction target addresses is further used to store bias weights. A history storage stores history-based weights for the branch instruction source addresses and a history-based weight is dependent on whether a branch to a branch instruction target address from a branch instruction source address has previously been taken for at least one previous encounter with the branch instruction source address. Prediction generation circuitry receives the bias weight and the history-based weight of the branch instruction source address and generates either a taken prediction or a not-taken prediction for the branch. The reuse of the branch target storage to store bias weights reduces the total storage required and the matching of entire source addresses avoids problems related to aliasing.
Type: Application
Filed: November 27, 2015
Publication date: June 1, 2017
Inventors: Alexander Alfred Hornung, Adrian Viorel Popescu
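The prediction step described above combines a per-branch bias weight, held alongside the target address in the branch target storage, with a history-based weight, and the combined weight decides taken versus not-taken. The sketch below reduces the history-based weight to a single per-branch counter and uses a simple increment/decrement update, both of which are simplifying assumptions rather than the patented scheme.

```cpp
// Combining a per-branch bias weight (held alongside the target in the
// branch target storage) with a history-based weight; the sign of the sum
// gives the prediction. The single per-branch history counter and the
// increment/decrement update are simplifying assumptions.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct BtbEntry { uint64_t target; int bias; };

struct BiasWeightPredictor {
    std::unordered_map<uint64_t, BtbEntry> btb;    // bias weight stored with the target
    std::unordered_map<uint64_t, int> history_w;   // history-based weight per source address

    bool predictTaken(uint64_t pc) const {
        int bias = btb.count(pc) ? btb.at(pc).bias : 0;
        int hist = history_w.count(pc) ? history_w.at(pc) : 0;
        return bias + hist >= 0;                   // taken iff the combined weight is non-negative
    }
    void train(uint64_t pc, bool taken) {
        int d = taken ? 1 : -1;
        btb[pc].bias += d;
        history_w[pc] += d;
    }
};

int main() {
    BiasWeightPredictor p;
    const uint64_t pc = 0x400200;
    p.btb[pc] = {0x400280, 0};                     // branch source -> target, zero bias

    for (int i = 0; i < 3; ++i) p.train(pc, true); // branch keeps being taken
    std::cout << "predict taken? " << p.predictTaken(pc) << "\n";
    for (int i = 0; i < 8; ++i) p.train(pc, false); // behaviour flips
    std::cout << "predict taken? " << p.predictTaken(pc) << "\n";
}
```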