Patents by Inventor Michael Caulfield
Michael Caulfield has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 12182261
  Abstract: A data processing apparatus is provided which controls the use of data in respect of a further operation. The data processing apparatus identifies whether data is trusted or untrusted according to whether or not the data was determined by a speculatively executed resolve-pending operation. A permission control unit is also provided to control how the data can be used in respect of a further operation according to a security policy while the speculatively executed operation is still resolve-pending.
  Type: Grant
  Filed: October 25, 2019
  Date of Patent: December 31, 2024
  Assignee: Arm Limited
  Inventors: Alastair David Reid, Albin Pierrick Tonnerre, Frederic Claude Marie Piry, Peter Richard Greenhalgh, Ian Michael Caulfield, Timothy Hayes, Giacomo Gabrielli
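As an illustration only (the patent describes hardware circuitry, not software), a minimal Python sketch of the idea: values produced by still-unresolved speculative operations are tagged as untrusted, and a simple policy decides whether such a value may feed a further operation such as a load address. All class, function, and policy names here are invented.

```python
# Minimal software model of taint-tracking for speculative results.
# All names are illustrative; the patent describes hardware circuitry.

class Value:
    def __init__(self, data, from_pending_speculation=False):
        self.data = data
        # Untrusted while the producing operation is still resolve-pending.
        self.untrusted = from_pending_speculation

def permit_use(value, operation, policy="block_untrusted_addresses"):
    """Apply a simple security policy before a further operation uses the value."""
    if policy == "block_untrusted_addresses" and operation == "load_address":
        return not value.untrusted   # forbid untrusted data as a load address
    return True

# A speculative, not-yet-resolved load produces an untrusted value.
v = Value(data=0x1000, from_pending_speculation=True)
print(permit_use(v, "load_address"))  # False: blocked while speculation is pending
v.untrusted = False                   # speculation resolved correctly
print(permit_use(v, "load_address"))  # True
```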
- Publication number: 20240386094
  Abstract: An apparatus has processing circuitry to execute instructions and address prediction storage circuitry to store address prediction information for use in predicting upcoming instructions to be executed by the processing circuitry. The processing circuitry is responsive to an instruction to generate a pointer signature for a pointer, to generate the pointer signature based on an address of the pointer and a cryptographic key. The address prediction storage circuitry is also configured to store address prediction information for the pointer, the address prediction information including the pointer. The processing circuitry is responsive to an instruction to authenticate a given pointer to obtain, based on the address prediction information for the given pointer, a predicted pointer signature; compare the predicted pointer signature with a pointer signature identified by the instruction to authenticate; and, responsive to the comparing detecting a match, determine that the given pointer is valid.
  Type: Application
  Filed: July 7, 2022
  Publication date: November 21, 2024
  Applicant: Arm Limited
  Inventors: Alexander Alfred Hornung, Ian Michael Caulfield
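A rough software analogue of the signing and authentication steps, assuming an HMAC stands in for the hardware signing function and that the prediction storage simply retains the pointer so the signature can be recomputed; real pointer-authentication schemes pack a short signature into unused pointer bits and use dedicated keys.

```python
import hmac, hashlib

KEY = b"per-context-secret-key"  # stands in for the cryptographic key

def sign_pointer(pointer: int) -> bytes:
    """Generate a signature from the pointer's address and the key."""
    return hmac.new(KEY, pointer.to_bytes(8, "little"), hashlib.sha256).digest()[:4]

def authenticate(signature: bytes, predicted_signature: bytes) -> bool:
    """Compare the signature supplied with the authenticate instruction against
    the signature predicted from stored address-prediction information."""
    return hmac.compare_digest(signature, predicted_signature)

ptr = 0x0000_7F00_DEAD_BEE0
sig = sign_pointer(ptr)
# Prediction storage holds the pointer, so a predicted signature can be recomputed.
predicted = sign_pointer(ptr)
print(authenticate(sig, predicted))  # True: the pointer is treated as valid
```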
- Patent number: 12050805
  Abstract: An apparatus supports decoding and execution of a bulk memory instruction specifying a block size parameter. The apparatus comprises control circuitry to determine whether the block size corresponding to the block size parameter exceeds a predetermined threshold and to perform a micro-architectural control action to influence the handling of at least one bulk memory operation by memory operation processing circuitry. The micro-architectural control action varies depending on whether the block size exceeds the predetermined threshold, and further depending on the states of other components and operations within or coupled with the apparatus. The micro-architectural control action could include an alignment correction action, cache allocation control action, or processing circuitry selection action.
  Type: Grant
  Filed: July 28, 2022
  Date of Patent: July 30, 2024
  Assignee: Arm Limited
  Inventors: Ian Michael Caulfield, Abhishek Raja, Alexander Alfred Hornung
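A schematic sketch of the threshold decision; the threshold value, the inputs consulted, and the action names are invented for illustration and are not taken from the patent.

```python
THRESHOLD = 4096  # illustrative block-size threshold in bytes

def choose_action(block_size: int, dst_aligned: bool, other_unit_busy: bool) -> str:
    """Pick a micro-architectural control action for a bulk memory operation."""
    if block_size > THRESHOLD:
        # Large copies: worth correcting alignment or choosing different circuitry.
        if not dst_aligned:
            return "alignment_correction"
        if not other_unit_busy:
            return "select_bulk_copy_engine"
        return "limit_cache_allocation"
    # Small copies: handle in the regular pipeline with normal cache allocation.
    return "default_pipeline"

print(choose_action(64, dst_aligned=False, other_unit_busy=False))       # default_pipeline
print(choose_action(1 << 20, dst_aligned=True, other_unit_busy=True))    # limit_cache_allocation
```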
- Patent number: 12047479
  Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
  Type: Grant
  Filed: August 30, 2021
  Date of Patent: July 23, 2024
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Adrian Michael Caulfield, Michael Konstantinos Papamichael
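A minimal sketch of the bitmask idea, assuming a simple additive checksum and a byte-granular per-packet mask; the actual checksum algorithm and mask granularity are not specified here.

```python
def masked_checksum(payload: bytes, mask: bytes) -> int:
    """Sum only the bytes whose mask bit is 1; masked-out bytes are ignored,
    so errors confined to them do not force a retransmission."""
    total = 0
    for i, b in enumerate(payload):
        if mask[i // 8] & (1 << (i % 8)):
            total = (total + b) & 0xFFFF
    return total

payload = bytes(range(16))
mask = bytes([0xFF, 0x00])          # protect the first 8 bytes only
sent = masked_checksum(payload, mask)

corrupted = bytearray(payload)
corrupted[12] ^= 0xFF               # transmission error in an unprotected byte
print(masked_checksum(bytes(corrupted), mask) == sent)  # True: no retransmission needed
```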
- Patent number: 11941082
  Abstract: Systems and methods for classifying product feedback by an electronic device are described. According to certain aspects, an electronic device may receive consumer feedback entries associated with various products, where each entry may include an initial classification. The electronic device may analyze each entry using a machine learning model to determine a subsequent classification for the entry. When there is a mismatch between classifications, the electronic device may present information associated with the entry for review by a user, where the user may specify a final classification for the entry, and the electronic device may update the machine learning model for use in subsequent analyses.
  Type: Grant
  Filed: April 12, 2019
  Date of Patent: March 26, 2024
  Assignee: UL LLC
  Inventors: Christian Dorn Anschuetz, Surekha Durvasula, Spencer Sharpe, Kyle Michael Caulfield
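The review loop can be sketched in a few lines; the keyword classifier below is a stand-in, as the patent does not prescribe a particular model.

```python
def review_feedback(entries, classify):
    """Flag entries whose model classification disagrees with the initial one."""
    needs_review = []
    for entry in entries:
        predicted = classify(entry["text"])
        if predicted != entry["initial_class"]:
            needs_review.append((entry, predicted))   # present to a human reviewer
    return needs_review

# Toy stand-in classifier based on a keyword.
classify = lambda text: "safety" if "overheats" in text else "general"
entries = [
    {"text": "The charger overheats after an hour", "initial_class": "general"},
    {"text": "Love the color options", "initial_class": "general"},
]
for entry, predicted in review_feedback(entries, classify):
    entry["final_class"] = predicted      # reviewer confirms or overrides here;
                                          # the final label also feeds model updates
print(entries[0]["final_class"])          # safety
```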
- Publication number: 20240036760
  Abstract: An apparatus supports decoding and execution of a bulk memory instruction specifying a block size parameter. The apparatus comprises control circuitry to determine whether the block size corresponding to the block size parameter exceeds a predetermined threshold and to perform a micro-architectural control action to influence the handling of at least one bulk memory operation by memory operation processing circuitry. The micro-architectural control action varies depending on whether the block size exceeds the predetermined threshold, and further depending on the states of other components and operations within or coupled with the apparatus. The micro-architectural control action could include an alignment correction action, cache allocation control action, or processing circuitry selection action.
  Type: Application
  Filed: July 28, 2022
  Publication date: February 1, 2024
  Inventors: Ian Michael Caulfield, Abhishek Raja, Alexander Alfred Hornung
- Publication number: 20240020480
  Abstract: Systems and methods for dynamically generating object models corresponding to regulations. According to certain aspects, a server computer may access a regulation and automatically generate a summary of the regulation based on a specific set of sentences. The server computer may additionally determine a set of topics and named-entity attributes for text within a regulation object model, as well as a probability that a topic or attribute is applicable to the regulation. Further, the server computer may generate and enrich object models according to the various analyses and make the enriched object models available for review by entities and users of regulatory compliance services.
  Type: Application
  Filed: September 27, 2023
  Publication date: January 18, 2024
  Inventors: Spencer Sharpe, Annie Ibrahim Rana, Valeriy Liberman, Michael Arnold, Kyle Michael Caulfield, James Cogley, Lisa Epstein, Tricia Sheehan, Rashid Mehdiyev, Saurav Acharya
- Publication number: 20240004611
  Abstract: Processing circuitry performs a processing operation to generate a two's complement result value representing a positive or negative number in two's complement representation. Normalization-and-rounding circuitry converts the two's complement result value to a normalized-and-rounded floating-point result value represented using sign-magnitude representation. The normalization-and-rounding circuitry comprises incrementing circuitry to perform an increment addition (e.g. a rounding increment or a conversion increment) to generate a fraction of the normalized-and-rounded floating-point result value. For an operation where the increment addition is required to be performed, tininess detection circuitry detects the after-rounding tininess status based on a still-to-be-incremented version of the normalized-and-rounded floating-point result value prior to the increment addition by the incrementing circuitry.
  Type: Application
  Filed: July 1, 2022
  Publication date: January 4, 2024
  Inventors: Michael Alexander Kennedy, Marco Montagna, Karel Hubertus Gerardus Walters, Ian Michael Caulfield
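A deliberately simplified toy model of the conversion, using an invented 8-bit fixed-point input and a made-up miniature float format, just to show where the conversion increment, the rounding increment, and the before-increment tininess check sit relative to one another.

```python
# Toy model: convert an 8-bit two's-complement value (scaled by 2**-10) to a
# sign-magnitude "float" with 3 fraction bits, flagging tininess *before* the
# increment is applied.  Format parameters are invented for illustration.

MIN_NORMAL_EXP = -6   # smallest exponent of a normal number in the toy format
FRAC_BITS = 3

def convert(twos_complement: int):
    sign = 1 if twos_complement & 0x80 else 0
    # Conversion increment: the magnitude of a negative value is (~x + 1).
    mag = ((~twos_complement + 1) & 0xFF) if sign else twos_complement
    if mag == 0:
        return sign, 0, 0, False
    exp = mag.bit_length() - 1 - 10          # value = mag * 2**-10
    # Tininess is judged on the still-to-be-incremented result.
    tiny = exp < MIN_NORMAL_EXP
    # Round-half-up on the normalized fraction (the rounding increment).
    shift = mag.bit_length() - 1 - FRAC_BITS
    frac = mag >> shift if shift >= 0 else mag << -shift
    if shift > 0 and (mag >> (shift - 1)) & 1:
        frac += 1                             # rounding increment
        if frac >> (FRAC_BITS + 1):           # rounding overflowed the fraction
            frac >>= 1
            exp += 1
    return sign, frac, exp, tiny

print(convert(0b0000_0011))   # small positive value: flagged tiny before rounding
print(convert(0b1111_0000))   # negative value: conversion increment applied
```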
- Publication number: 20230379254
  Abstract: Techniques and algorithms for monitoring network congestion and for triggering a flow to follow a new path through a network. The network is monitored, and network feedback data is acquired, where that data indicates whether the network is congested. If the network is congested, a feedback-driven algorithm can trigger a flow to follow a new path. By triggering the flow to follow the new path, congestion in the network is reduced. To identify congestion, the feedback data is analyzed to determine whether flows are colliding. The feedback-driven algorithm determines that a network remapping event is to occur in an attempt to alleviate the congestion. A flow is then selected to be remapped to alleviate the congestion.
  Type: Application
  Filed: May 18, 2022
  Publication date: November 23, 2023
  Inventors: Michael Konstantinos Papamichael, Mohammad Saifee Dohadwala, Adrian Michael Caulfield, Prashant Ranjan
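A simplified sketch of the feedback-driven remapping loop; path selection here is just a hash over a per-flow salt, and the telemetry format is invented, since the abstract does not fix either mechanism.

```python
import hashlib

PATHS = 4

def path_for(flow: str, salt: int) -> int:
    """Deterministically map a flow (plus a remap counter) onto one of the paths."""
    h = hashlib.sha256(f"{flow}:{salt}".encode()).digest()
    return h[0] % PATHS

def remap_on_congestion(salts, feedback):
    """If feedback reports colliding flows on a congested path, bump one flow's
    salt so it hashes to a (likely) different path."""
    for path, colliding in feedback.items():
        if len(colliding) > 1:                 # collision detected on this path
            victim = colliding[0]              # select one flow to remap
            salts[victim] += 1
            return victim
    return None

flows = ["A", "B", "C"]
salts = {f: 0 for f in flows}
placement = {f: path_for(f, salts[f]) for f in flows}
# Suppose telemetry says flows A and B collide on the same congested path.
congested_path = placement["A"]
remapped = remap_on_congestion(salts, {congested_path: ["A", "B"]})
print(remapped, path_for(remapped, salts[remapped]))
```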
- Patent number: 11803388
  Abstract: An apparatus and method are provided for processing instructions. The apparatus has execution circuitry for executing instructions, where each instruction requires an associated operation to be performed using one or more source operand values in order to produce a result value. Issue circuitry is used to maintain a record of pending instructions awaiting execution by the execution circuitry, and prediction circuitry is used to produce a predicted source operand value for a chosen pending instruction. Optimisation circuitry is then arranged to detect an optimisation condition for the chosen pending instruction when the predicted source operand value is such that, having regard to the associated operation for the chosen pending instruction, the result value is known without performing the associated operation.
  Type: Grant
  Filed: July 17, 2019
  Date of Patent: October 31, 2023
  Assignee: Arm Limited
  Inventors: Peter Richard Greenhalgh, Frederic Claude Marie Piry, Ian Michael Caulfield, Albin Pierrick Tonnerre
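The optimisation condition can be illustrated in a few lines: if a predicted source operand alone forces the result (for example, multiplying by zero), the result is known without executing the operation. The small operation set below is only an example.

```python
def known_result(op, predicted_operand):
    """Return (True, result) when the predicted operand alone determines the
    result of the operation, else (False, None)."""
    if op in ("mul", "and") and predicted_operand == 0:
        return True, 0            # x*0 == 0 and x&0 == 0 for any x
    if op == "or" and predicted_operand == 0xFFFFFFFF:
        return True, 0xFFFFFFFF   # x | all-ones == all-ones
    return False, None

print(known_result("mul", 0))     # (True, 0): skip the multiply entirely
print(known_result("mul", 3))     # (False, None): must execute as usual
```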
- Patent number: 11783132
  Abstract: Systems and methods for dynamically generating object models corresponding to regulations. According to certain aspects, a server computer may access a regulation and automatically generate a summary of the regulation based on a specific set of sentences. The server computer may additionally determine a set of topics and named-entity attributes for text within a regulation object model, as well as a probability that a topic or attribute is applicable to the regulation. Further, the server computer may generate and enrich object models according to the various analyses and make the enriched object models available for review by entities and users of regulatory compliance services.
  Type: Grant
  Filed: October 16, 2020
  Date of Patent: October 10, 2023
  Assignee: UL LLC
  Inventors: Spencer Sharpe, Annie Ibrahim Rana, Valeriy Liberman, Michael Arnold, Kyle Michael Caulfield, James Cogley, Lisa Epstein, Tricia Sheehan, Rashid Mehdiyev, Saurav Acharya
- Patent number: 11775269
  Abstract: A multi-threaded imperative programming language includes a language construct defining a function call. A circuit implementation for the construct includes a first pipeline, a second pipeline, and a third pipeline. The first hardware pipeline outputs variables to a first queue and outputs parameters for the function to a second queue. The second hardware pipeline obtains the function parameters from the second queue, performs the function, and stores the results of the function in a third queue. The third hardware pipeline retrieves the results generated by the second pipeline from the third queue and retrieves the variables from the first queue. The third hardware pipeline performs hardware operations specified by the source code using the variables and the results of the function. A single instance of the circuit implementation can be utilized to implement calls to the same function made from multiple locations within source code.
  Type: Grant
  Filed: February 3, 2022
  Date of Patent: October 3, 2023
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Blake D. Pelton, Adrian Michael Caulfield
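A thread-based sketch of the three-stage arrangement, with software threads and queue.Queue objects standing in for hardware pipelines and FIFOs.

```python
import queue, threading

live_vars_q, params_q, results_q = queue.Queue(), queue.Queue(), queue.Queue()

def stage1():
    # Save the caller's live variables and emit the function's parameters.
    live_vars_q.put({"x": 7})
    params_q.put((3, 4))

def stage2():
    # The shared "function body": one instance serves every call site.
    a, b = params_q.get()
    results_q.put(a + b)

def stage3():
    # Recombine the function result with the saved variables and continue.
    result = results_q.get()
    env = live_vars_q.get()
    print(env["x"] * result)   # continue the caller's computation: 7 * (3 + 4)

for stage in (stage1, stage2, stage3):
    t = threading.Thread(target=stage); t.start(); t.join()
```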
- Patent number: 11579879
  Abstract: An apparatus 2 has a processing pipeline 4 supporting at least a first processing mode and a second processing mode with different energy consumption or performance characteristics. A storage structure 22, 30, 36, 50, 40, 64, 44 is accessible in both the first and second processing modes. When the second processing mode is selected, control circuitry 70 triggers a subset 102 of the entries of the storage structure to be placed in a power saving state.
  Type: Grant
  Filed: April 7, 2021
  Date of Patent: February 14, 2023
  Assignee: Arm Limited
  Inventors: Max John Batley, Simon John Craske, Ian Michael Caulfield, Peter Richard Greenhalgh, Allan John Skillman, Antony John Penton
- Patent number: 11526615
  Abstract: An apparatus comprises processing circuitry 14 to perform data processing in response to instructions, the processing circuitry supporting speculative processing of read operations for reading data from a memory system 20, 22; and control circuitry 12, 14, 20 to identify whether a sequence of instructions to be processed by the processing circuitry includes a speculative side-channel hint instruction indicative of whether there is a risk of information leakage if at least one subsequent read operation is processed speculatively, and to determine whether to trigger a speculative side-channel mitigation measure depending on whether the instructions include the speculative side-channel hint instruction. This can help to reduce the performance impact of measures taken to protect against speculative side-channel attacks.
  Type: Grant
  Filed: March 12, 2019
  Date of Patent: December 13, 2022
  Assignee: Arm Limited
  Inventors: Peter Richard Greenhalgh, Frederic Claude Marie Piry, Ian Michael Caulfield, Albin Pierrick Tonnerre
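The decision can be modelled very simply: scan the upcoming instructions for the hint and only apply the (costlier) mitigation when no hint marks the following reads as safe. The hint mnemonic and its exact meaning are assumptions for illustration.

```python
def mitigate_speculative_reads(window):
    """Return True if a speculative side-channel mitigation measure should be
    applied to the read operations in this instruction window."""
    for insn in window:
        if insn == "SSBH_SAFE":      # hypothetical hint: subsequent reads carry no leak risk
            return False             # skip the mitigation, keep full speculation
    return True                      # no hint: conservatively apply the mitigation

print(mitigate_speculative_reads(["LDR r0,[r1]", "ADD r2,r0,#1"]))   # True
print(mitigate_speculative_reads(["SSBH_SAFE", "LDR r0,[r1]"]))      # False
```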
- Publication number: 20220291923
  Abstract: Techniques for performing matrix multiplication in a data processing apparatus are disclosed, comprising apparatuses, matrix multiply instructions, methods of operating the apparatuses, and virtual machine implementations. Registers, each register for storing at least four data elements, are referenced by a matrix multiply instruction and in response to the matrix multiply instruction a matrix multiply operation is carried out. First and second matrices of data elements are extracted from first and second source registers, and plural dot product operations, acting on respective rows of the first matrix and respective columns of the second matrix, are performed to generate a square matrix of result data elements, which is applied to a destination register. A higher computation density for a given number of register operands is achieved with respect to vector-by-element techniques.
  Type: Application
  Filed: February 23, 2022
  Publication date: September 15, 2022
  Inventors: David Hennah Mansell, Rune Holm, Ian Michael Caulfield, Jelena Milanovic
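In software terms, the operation on two 2×2 matrices packed into four-element "registers" looks like the following; the matrix size and row-major packing are chosen for illustration.

```python
def matmul_2x2(src1, src2):
    """src1 and src2 each hold four data elements: a 2x2 matrix in row-major
    order.  Each result element is a dot product of a row of src1 with a
    column of src2, matching the described matrix multiply operation."""
    a, b, c, d = src1
    e, f, g, h = src2
    return [a*e + b*g, a*f + b*h,     # row 0 . col 0, row 0 . col 1
            c*e + d*g, c*f + d*h]     # row 1 . col 0, row 1 . col 1

dst = matmul_2x2([1, 2, 3, 4], [5, 6, 7, 8])   # "destination register"
print(dst)   # [19, 22, 43, 50]
```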
- Patent number: 11429393
  Abstract: An apparatus for data processing and a method of data processing are provided. Data processing operations are performed in response to instructions which reference architectural registers, using physical registers to store data values when performing the data processing operations. Mappings between the architectural registers and the physical registers are stored, and when a data hazard condition is identified with respect to out-of-order program execution of an instruction, an architectural register specified in the instruction is remapped to an available physical register. A reorder buffer stores an entry for each destination architectural register specified by the instruction, entries being stored in program order, and an entry specifies a destination architectural register and an original physical register to which the destination architectural register was mapped before the architectural register was remapped to an available physical register.
  Type: Grant
  Filed: November 11, 2015
  Date of Patent: August 30, 2022
  Assignee: Arm Limited
  Inventors: Vladimir Vasekin, Ian Michael Caulfield, Chiloda Ashan Senarath Pathirane
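A compact software model of the renaming step; the table sizes and data structures are simplified stand-ins for the hardware described.

```python
free_physical = [3, 4, 5]                 # physical registers not currently mapped
rename_map = {"r0": 0, "r1": 1, "r2": 2}  # architectural -> physical mapping
reorder_buffer = []                       # entries kept in program order

def rename_destination(arch_reg):
    """Remap a destination architectural register to a fresh physical register,
    recording the old mapping so it can be restored on a mis-speculated path."""
    old_phys = rename_map[arch_reg]
    new_phys = free_physical.pop(0)
    rename_map[arch_reg] = new_phys
    reorder_buffer.append({"dest": arch_reg, "original_phys": old_phys})
    return new_phys

rename_destination("r1")                  # e.g. a data hazard detected on r1
print(rename_map["r1"], reorder_buffer[-1])   # 3 {'dest': 'r1', 'original_phys': 1}
```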
- Patent number: 11397584
  Abstract: An apparatus and method of operating a data processing apparatus are disclosed. The apparatus comprises data processing circuitry to perform data processing operations in response to a sequence of instructions, wherein the data processing circuitry is capable of performing speculative execution of at least some of the sequence of instructions. A cache structure comprising entries stores temporary copies of data items which are subjected to the data processing operations, and speculative execution tracking circuitry monitors correctness of the speculative execution and, responsive to an indication of incorrect speculative execution, causes entries in the cache structure allocated by the incorrect speculative execution to be evicted from the cache structure.
  Type: Grant
  Filed: March 21, 2019
  Date of Patent: July 26, 2022
  Assignee: Arm Limited
  Inventors: Ian Michael Caulfield, Peter Richard Greenhalgh, Frederic Claude Marie Piry, Albin Pierrick Tonnerre
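A toy model of the tracking: cache fills performed under speculation are tagged with a speculation "epoch", and every entry from an epoch that resolves as mis-speculated is evicted. The epoch-tagging scheme is an assumption made for illustration.

```python
cache = {}          # address -> {"data": ..., "spec_epoch": int or None}

def speculative_fill(addr, data, epoch):
    cache[addr] = {"data": data, "spec_epoch": epoch}

def resolve_speculation(epoch, correct):
    """On resolution, either commit the entries (clear the tag) or evict them."""
    for addr in list(cache):
        if cache[addr]["spec_epoch"] == epoch:
            if correct:
                cache[addr]["spec_epoch"] = None    # now architecturally visible
            else:
                del cache[addr]                     # undo the speculative side effect

speculative_fill(0x100, b"secret-dependent line", epoch=7)
resolve_speculation(epoch=7, correct=False)
print(0x100 in cache)   # False: the mis-speculated allocation leaves no trace
```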
- Patent number: 11392383
  Abstract: Examples of the present disclosure relate to an apparatus comprising execution circuitry to execute instructions defining data processing operations on data items. The apparatus comprises cache storage to store temporary copies of the data items. The apparatus comprises prefetching circuitry to a) predict that a data item will be subject to the data processing operations by the execution circuitry by determining that the data item is consistent with an extrapolation of previous data item retrieval by the execution circuitry, and identifying that at least one control flow element of the instructions indicates that the data item will be subject to the data processing operations by the execution circuitry; and b) prefetch the data item into the cache storage.
  Type: Grant
  Filed: March 14, 2019
  Date of Patent: July 19, 2022
  Assignee: Arm Limited
  Inventors: Ian Michael Caulfield, Peter Richard Greenhalgh, Frederic Claude Marie Piry, Albin Pierrick Tonnerre
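The two conditions can be sketched as a pair of checks: the candidate address must extend the observed stride pattern, and a control-flow element (here, a loop bound) must indicate the access will actually happen. The structures are simplified stand-ins.

```python
def should_prefetch(history, candidate, loop_limit):
    """Prefetch only if (a) the candidate extrapolates the observed access
    stride and (b) control flow says the access will really happen."""
    if len(history) < 2:
        return False
    stride = history[-1] - history[-2]
    fits_pattern = candidate == history[-1] + stride          # condition (a)
    within_bounds = candidate < loop_limit                    # condition (b)
    return fits_pattern and within_bounds

accesses = [0x1000, 0x1040, 0x1080]         # stride of 0x40 observed so far
print(should_prefetch(accesses, 0x10C0, loop_limit=0x2000))   # True: prefetch
print(should_prefetch(accesses, 0x10C0, loop_limit=0x10A0))   # False: past loop end
```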
- Patent number: 11340901
  Abstract: An apparatus and method are provided for controlling allocation of instructions into an instruction cache storage. The apparatus comprises processing circuitry to execute instructions, fetch circuitry to fetch instructions from memory for execution by the processing circuitry, and an instruction cache storage to store instructions fetched from the memory by the fetch circuitry. Cache control circuitry is responsive to the fetch circuitry fetching a target instruction from a memory address determined as a target address of an instruction flow changing instruction, at least when the memory address is within a specific address range, to prevent allocation of the fetched target instruction into the instruction cache storage unless the fetched target instruction is at least one specific type of instruction. It has been found that such an approach can inhibit speculation-based cache timing side-channel attacks.
  Type: Grant
  Filed: March 20, 2019
  Date of Patent: May 24, 2022
  Assignee: Arm Limited
  Inventors: Frederic Claude Marie Piry, Peter Richard Greenhalgh, Ian Michael Caulfield, Albin Pierrick Tonnerre
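The allocation filter reduces to one check at fill time; the protected address range and the notion of an "allowed" target instruction type below are placeholders for whatever an implementation designates.

```python
PROTECTED_RANGE = range(0xFFFF0000, 0xFFFFFFFF)   # illustrative special address range
ALLOWED_TARGET_TYPES = {"branch_target_marker"}   # e.g. only landing-pad style instructions

icache = {}

def maybe_allocate(addr, insn_type, insn_bytes, is_branch_target):
    """Allocate a fetched instruction into the I-cache unless it is a branch
    target in the protected range of a type other than the allowed ones."""
    if is_branch_target and addr in PROTECTED_RANGE:
        if insn_type not in ALLOWED_TARGET_TYPES:
            return False                          # fetched and executed, but never cached
    icache[addr] = insn_bytes
    return True

print(maybe_allocate(0xFFFF1000, "load", b"\x00", is_branch_target=True))   # False
print(maybe_allocate(0x00401000, "load", b"\x00", is_branch_target=True))   # True
```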
- Publication number: 20220156050
  Abstract: A multi-threaded imperative programming language includes a language construct defining a function call. A circuit implementation for the construct includes a first pipeline, a second pipeline, and a third pipeline. The first hardware pipeline outputs variables to a first queue and outputs parameters for the function to a second queue. The second hardware pipeline obtains the function parameters from the second queue, performs the function, and stores the results of the function in a third queue. The third hardware pipeline retrieves the results generated by the second pipeline from the third queue and retrieves the variables from the first queue. The third hardware pipeline performs hardware operations specified by the source code using the variables and the results of the function. A single instance of the circuit implementation can be utilized to implement calls to the same function made from multiple locations within source code.
  Type: Application
  Filed: February 3, 2022
  Publication date: May 19, 2022
  Inventors: Blake D. Pelton, Adrian Michael Caulfield