Patents by Inventor Michael Caulfield

Michael Caulfield has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11263133
    Abstract: Coherency control circuitry (10) supports processing of a safe-speculative-read transaction received from a requesting master device (4). The safe-speculative-read transaction is of a type requesting that target data is returned to a requesting cache (11) of the requesting master device (4) while prohibiting any change in coherency state associated with the target data in other caches (12) in response to the safe-speculative-read transaction. In response, at least when the target data is cached in a second cache associated with a second master device, at least one of the coherency control circuitry (10) and the second cache (12) is configured to return a safe-speculative-read response while maintaining the target data in the same coherency state within the second cache. This helps to mitigate against speculative side-channel attacks.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: March 1, 2022
    Assignee: Arm Limited
    Inventors: Andreas Lars Sandberg, Stephan Diestelhorst, Nikos Nikoleris, Ian Michael Caulfield, Peter Richard Greenhalgh, Frederic Claude Marie Piry, Albin Pierrick Tonnerre
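
The safe-speculative-read behaviour described in the abstract above can be illustrated in software. The Python sketch below contrasts it with an ordinary coherent read; the class names, state strings and controller interface are assumptions for illustration, not the actual Arm coherency protocol.

```python
# Toy model of the safe-speculative-read idea: a speculative read returns
# data without changing the coherency state held by another master's cache,
# unlike a normal coherent read, so mispredicted accesses leave no footprint.

class CacheLine:
    def __init__(self, data, state):
        self.data = data
        self.state = state  # e.g. "Modified", "Shared", "Invalid"

class CoherencyController:
    def __init__(self, caches):
        self.caches = caches  # master_id -> {addr: CacheLine}

    def normal_read(self, requester, addr):
        for owner, cache in self.caches.items():
            if owner != requester and addr in cache:
                line = cache[addr]
                line.state = "Shared"          # a normal read downgrades the owner
                return line.data
        return None                            # would fetch from memory

    def safe_speculative_read(self, requester, addr):
        for owner, cache in self.caches.items():
            if owner != requester and addr in cache:
                # Return the data but leave the owner's state untouched.
                return cache[addr].data
        return None

caches = {"cpu0": {0x80: CacheLine(42, "Modified")}, "cpu1": {}}
cc = CoherencyController(caches)
print(cc.safe_speculative_read("cpu1", 0x80), caches["cpu0"][0x80].state)  # 42 Modified
print(cc.normal_read("cpu1", 0x80), caches["cpu0"][0x80].state)            # 42 Shared
```
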
  • Publication number: 20220050909
    Abstract: A data processing apparatus is provided which controls the use of data in respect of a further operation. The data processing apparatus identifies data as trusted or untrusted according to whether the data was determined by a speculatively executed operation that is still resolve-pending. A permission control unit is also provided to control how the data can be used in respect of a further operation according to a security policy while the speculatively executed operation is still resolve-pending.
    Type: Application
    Filed: October 25, 2019
    Publication date: February 17, 2022
    Inventors: Alastair David REID, Albin Pierrick TONNERRE, Frederic Claude Marie PIRY, Peter Richard GREENHALGH, Ian Michael CAULFIELD, Timothy HAYES, Giacomo GABRIELLI
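
As a rough software analogue of the mechanism above, the sketch below tags values produced by unresolved speculation and consults a policy before allowing them to feed a further operation; the value wrapper, policy table and operation names are illustrative assumptions.

```python
# Gate the use of speculatively produced data: values carry a
# "resolve-pending" flag, and a permission policy decides whether they may
# feed a further operation (e.g. a dependent load address).

class SpecValue:
    def __init__(self, value, resolve_pending):
        self.value = value
        self.resolve_pending = resolve_pending  # produced by unresolved speculation?

class PermissionControl:
    def __init__(self, policy):
        self.policy = policy  # operation name -> allowed while pending?

    def may_use(self, value, operation):
        if not value.resolve_pending:
            return True                    # trusted: speculation already resolved
        return self.policy.get(operation, False)

policy = {"add": True, "load_with_address": False}  # block untrusted addresses
pc = PermissionControl(policy)
v = SpecValue(0xdeadbeef, resolve_pending=True)
print(pc.may_use(v, "add"))                # True
print(pc.may_use(v, "load_with_address"))  # False: could leak via the cache
```
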
  • Publication number: 20210392210
    Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
    Type: Application
    Filed: August 30, 2021
    Publication date: December 16, 2021
    Inventors: Adrian Michael CAULFIELD, Michael Konstantinos PAPAMICHAEL
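
A minimal sketch of the masked-checksum idea in the entry above: a bitmask with one bit per packet byte decides which bytes the checksum covers, so corruption in excluded bytes does not force retransmission. The additive 16-bit checksum and mask layout are assumptions for illustration, not the patented encoding.

```python
# Masked checksum: only bytes selected by the bitmask are covered, so errors
# in excluded bytes leave the checksum valid and no retransmission is needed.

def masked_checksum(packet: bytes, mask: bytes) -> int:
    # mask has one bit per byte of the packet; 1 = protected, 0 = excluded
    total = 0
    for i, byte in enumerate(packet):
        if mask[i // 8] & (1 << (i % 8)):
            total = (total + byte) & 0xFFFF
    return total

packet = bytearray(b"HDR!payload-that-may-tolerate-errors")
mask = bytearray(len(packet) // 8 + 1)
for i in range(4):                 # protect only the 4 header bytes
    mask[i // 8] |= 1 << (i % 8)

sent = masked_checksum(bytes(packet), bytes(mask))
packet[10] ^= 0xFF                 # corrupt an unprotected payload byte
assert masked_checksum(bytes(packet), bytes(mask)) == sent  # still valid
packet[0] ^= 0xFF                  # corrupt a protected header byte
assert masked_checksum(bytes(packet), bytes(mask)) != sent  # now detected
```
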
  • Patent number: 11144286
    Abstract: A multi-threaded imperative programming language includes language constructs that map to circuit implementations. The constructs can include a condition statement that enables a thread in a hardware pipeline to wait for a specified condition to occur, identify the start and end of a portion of source code instructions that are to be executed atomically, or indicate that a read-modify-write memory operation is to be performed atomically. Source code that includes one or more constructs mapping to a circuit implementation can be compiled to generate a circuit description. The circuit description can be expressed using a hardware description language (HDL), for instance. The circuit description can, in turn, be used to generate a synchronous digital circuit that includes the circuit implementation. For example, HDL might be utilized to generate an FPGA image or bitstream that can be utilized to program an FPGA that includes the circuit implementation associated with the language construct.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: October 12, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Blake D. Pelton, Adrian Michael Caulfield
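
The hardware constructs described above have no published syntax here, so the sketch below uses ordinary Python threading primitives as stand-ins: a wait-for-condition helper and an atomic read-modify-write region. It is only an analogue of the behaviour the constructs express, not the language itself.

```python
# Software analogues of two of the constructs named in the abstract:
# waiting for a specified condition, and an atomic read-modify-write.
from threading import Condition, Lock

shared = {"counter": 0}
rmw_lock = Lock()
cond = Condition()

def atomic_increment(key):
    # analogue of marking a read-modify-write as atomic
    with rmw_lock:
        shared[key] = shared[key] + 1

def wait_until(predicate):
    # analogue of a language-level wait-for-condition statement
    with cond:
        cond.wait_for(predicate, timeout=1.0)

atomic_increment("counter")
wait_until(lambda: shared["counter"] > 0)   # returns immediately: condition holds
print(shared["counter"])                    # 1
```
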
  • Publication number: 20210311742
    Abstract: An apparatus and method are provided for processing instructions. The apparatus has execution circuitry for executing instructions, where each instruction requires an associated operation to be performed using one or more source operand values in order to produce a result value. Issue circuitry is used to maintain a record of pending instructions awaiting execution by the execution circuitry, and prediction circuitry is used to produce a predicted source operand value for a chosen pending instruction. Optimisation circuitry is then arranged to detect an optimisation condition for the chosen pending instruction when the predicted source operand value is such that, having regard to the associated operation for the chosen pending instruction, the result value is known without performing the associated operation.
    Type: Application
    Filed: July 17, 2019
    Publication date: October 7, 2021
    Inventors: Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Ian Michael CAULFIELD, Albin Pierrick TONNERRE
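
The optimisation condition above can be illustrated with a small table of trivially known results: if the predicted operand makes the outcome certain (multiply by zero, add zero, and so on), the result is produced without executing the operation. The operation names and table are illustrative assumptions.

```python
# "Result known without executing": a predicted source operand that makes
# the operation trivial lets the result be produced without using an
# execution unit.

TRIVIAL_RESULTS = {
    ("mul", 0): lambda other: 0,
    ("and", 0): lambda other: 0,
    ("add", 0): lambda other: other,
    ("or", 0xFFFFFFFF): lambda other: 0xFFFFFFFF,
}

def try_optimise(op, predicted_operand, other_operand):
    rule = TRIVIAL_RESULTS.get((op, predicted_operand))
    if rule is not None:
        return True, rule(other_operand)   # result known; skip execution
    return False, None                     # execute normally once operands arrive

print(try_optimise("mul", 0, 1234))   # (True, 0)
print(try_optimise("add", 0, 1234))   # (True, 1234)
print(try_optimise("add", 3, 1234))   # (False, None)
```
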
  • Patent number: 11126714
    Abstract: A data processing apparatus comprises branch prediction circuitry adapted to store at least one branch prediction state entry in relation to a stream of instructions, input circuitry to receive at least one input to generate a new branch prediction state entry, wherein the at least one input comprises a plurality of bits; and coding circuitry adapted to perform an encoding operation to encode at least some of the plurality of bits based on a value associated with a current execution environment in which the stream of instructions is being executed. This guards against potential attacks which exploit the ability for branch prediction entries trained by one execution environment to be used by another execution environment as a basis for branch predictions.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: September 21, 2021
    Assignee: Arm Limited
    Inventors: Alastair David Reid, Dominic Phillip Mulligan, Milosch Meriac, Matthias Lothar Boettcher, Nathan Yong Seng Chong, Ian Michael Caulfield, Peter Richard Greenhalgh, Frederic Claude Marie Piry, Albin Pierrick Tonnerre, Thomas Christopher Grocutt, Yasuo Ishii
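
A toy version of the encoding idea above: branch-prediction entries are XOR-scrambled with a key derived from the execution environment that wrote them, so a different environment reading the same entry recovers a useless value. The key derivation and entry format below are invented for illustration.

```python
# Branch-prediction state is encoded with a value tied to the current
# execution environment, so entries trained by one environment cannot be
# meaningfully reused by another.

def context_key(exception_level: int, asid: int) -> int:
    return (exception_level << 16) ^ asid ^ 0x5A5A5A5A  # toy key derivation

def encode_entry(target_bits: int, key: int) -> int:
    return target_bits ^ key

def decode_entry(stored_bits: int, key: int) -> int:
    return stored_bits ^ key

victim_key = context_key(exception_level=1, asid=0x42)
attacker_key = context_key(exception_level=0, asid=0x99)

stored = encode_entry(0x0000A0C0, victim_key)      # victim trains the entry
print(hex(decode_entry(stored, victim_key)))       # 0xa0c0: correct target
print(hex(decode_entry(stored, attacker_key)))     # scrambled for the attacker
```
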
  • Patent number: 11113176
    Abstract: Program source code defined in a multi-threaded imperative programming language can be compiled into a circuit description for a synchronous digital circuit (“SDC”) that includes pipelines and queues. During compilation, data defining a debugging network for the SDC can be added to the circuit description. The circuit description can then be used to generate the SDC, for instance on an FPGA. A CPU connected to the SDC can utilize the debugging network to query the pipelines for state information such as data indicating that an input queue for a pipeline is empty, data indicating the state of an output queue, or data indicating if a wait condition for a pipeline has been satisfied. A profiling tool can execute on the CPU for use in debugging the SDC.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: September 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Blake D. Pelton, Adrian Michael Caulfield
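
The sketch below models the kind of query the debugging network enables: a CPU-side tool polls per-pipeline status registers (input queue empty, output queue depth, wait condition satisfied) to find stalled pipelines. The register names and transport are assumptions, not the actual interface.

```python
# A CPU-side profiling tool querying pipeline state over the debugging
# network, here modelled as a simple register map per pipeline.

PIPELINE_STATUS = {
    "pipeline0": {"input_queue_empty": True,  "output_queue_depth": 0, "wait_satisfied": True},
    "pipeline1": {"input_queue_empty": False, "output_queue_depth": 3, "wait_satisfied": False},
}

def query(pipeline: str, register: str):
    # stands in for a read over the on-chip debugging network
    return PIPELINE_STATUS[pipeline][register]

def find_stalled():
    # flag pipelines that hold work but whose wait condition is unmet
    return [p for p, s in PIPELINE_STATUS.items()
            if not s["input_queue_empty"] and not s["wait_satisfied"]]

print(query("pipeline1", "output_queue_depth"))  # 3
print(find_stalled())                            # ['pipeline1']
```
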
  • Patent number: 11106437
    Abstract: A programming language and a compiler are disclosed that optimize the use of look-up tables (LUTs) on a synchronous digital circuit (SDC) such as a field programmable gate array (FPGA) that has been programmed. LUTs are optimized by merging multiple computational operations into the same LUT. A compiler parses source code into an intermediate representation (IR). Each node of the IR that represents an operator (e.g. ‘&’, ‘+’) is mapped to a LUT that implements that operator. The compiler iteratively traverses the IR, merging adjacent LUTs into a LUT that performs both operations and performing input removal optimizations. Additional operators may be merged into a merged LUT until all the LUT's inputs are assigned. Pipeline stages are then generated based on merged LUTs, and an SDC is programmed based on the pipeline and the merged LUT.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: August 31, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Blake D. Pelton, Adrian Michael Caulfield
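
The LUT-merging pass above can be shown on single-bit truth tables: two operator LUTs where one feeds the other are composed into one LUT over the combined inputs. Real FPGA LUTs have a fixed input budget that the compiler must respect; this sketch ignores that limit for brevity.

```python
# Merge two single-bit operator LUTs into one: the output of the first LUT
# drives an input of the second, so both operations collapse into a single
# truth table over the union of their inputs.

from itertools import product

def lut(fn, n_inputs):
    # truth table: tuple of input bits -> output bit
    return {bits: fn(*bits) for bits in product((0, 1), repeat=n_inputs)}

AND = lut(lambda a, b: a & b, 2)
XOR = lut(lambda a, b: a ^ b, 2)

def merge(first, second):
    # merged LUT takes first's two inputs followed by second's remaining input
    merged = {}
    for a, b, c in product((0, 1), repeat=3):
        merged[(a, b, c)] = second[(first[(a, b)], c)]
    return merged

MERGED = merge(AND, XOR)            # computes (a & b) ^ c in one table
print(MERGED[(1, 1, 0)])            # 1
print(MERGED[(1, 0, 1)])            # 1
```
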
  • Patent number: 11108894
    Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: August 31, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adrian Michael Caulfield, Michael Konstantinos Papamichael
  • Patent number: 11093682
    Abstract: A multi-threaded programming language and compiler generates synchronous digital circuits that maintain thread execution order by generating pipelines with code paths that have the same number of stages. The compiler balances related code paths within a pipeline by adding additional stages to a code path that has fewer stages. Programming constructs that, by design, allow thread execution to be re-ordered, may be placed in a reorder block construct that releases threads in the order they entered the programming construct. First-in-first-out (FIFO) queues pass local variables between pipelines. Local variables are popped from FIFOs in the order they were pushed, preserving thread execution order across pipelines.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: August 17, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Blake D. Pelton, Adrian Michael Caulfield
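
A small illustration of the ordering property above: local variables move between pipelines through FIFOs, so threads leave a stage in the same order they entered it. The FIFO and stage functions are illustrative; the reorder-block construct is not modelled.

```python
# Thread execution order is preserved across pipelines by passing local
# variables through FIFOs: values are popped in the order they were pushed.

from collections import deque

class Fifo:
    def __init__(self):
        self.q = deque()
    def push(self, item):
        self.q.append(item)
    def pop(self):
        return self.q.popleft()

def pipeline_stage(fifo_in, fifo_out, fn):
    while fifo_in.q:
        thread_id, value = fifo_in.pop()
        fifo_out.push((thread_id, fn(value)))  # order preserved by the FIFO

stage_a_to_b = Fifo()
stage_b_out = Fifo()
for tid in range(4):
    stage_a_to_b.push((tid, tid * 10))
pipeline_stage(stage_a_to_b, stage_b_out, lambda v: v + 1)
print(list(stage_b_out.q))  # threads emerge in entry order: 0, 1, 2, 3
```
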
  • Patent number: 11074080
    Abstract: A processing pipeline may have first and second execution circuits having different performance or energy consumption characteristics. Instruction supply circuitry may support different instruction supply schemes with different energy consumption or performance characteristics. This can allow a further trade-off between performance and energy efficiency. Architectural state storage can be shared between the execute units to reduce the overhead of switching between the units. In a parallel execution mode, groups of instructions can be executed on both execute units in parallel.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: July 27, 2021
    Assignee: Arm Limited
    Inventors: Peter Richard Greenhalgh, Simon John Craske, Ian Michael Caulfield, Max John Batley, Allan John Skillman, Antony John Penton
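
As a loose illustration of the shared-state idea above, the sketch below keeps one register file visible to both a "big" and a "little" execute path, so switching modes needs no state migration. The class, mode names and dispatch rule are assumptions; the parallel execution mode is not modelled.

```python
# Architectural state (the register file) is shared between two execute
# units with different power/performance points, so switching units does
# not require copying state.

class Core:
    def __init__(self):
        self.regs = [0] * 16                  # shared architectural state
        self.mode = "efficient"               # or "performance"

    def execute(self, group):
        unit = "big" if self.mode == "performance" else "little"
        for dst, val in group:
            self.regs[dst] = val              # both units see the same registers
        return unit

core = Core()
print(core.execute([(0, 7)]))                 # 'little'
core.mode = "performance"                     # switch without migrating registers
print(core.execute([(1, 9)]), core.regs[:2])  # 'big' [7, 9]
```
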
  • Publication number: 20210224071
    Abstract: An apparatus 2 has a processing pipeline 4 supporting at least a first processing mode and a second processing mode with different energy consumption or performance characteristics. A storage structure 22, 30, 36, 50, 40, 64, 44 is accessible in both the first and second processing modes. When the second processing mode is selected, control circuitry 70 triggers a subset 102 of the entries of the storage structure to be placed in a power saving state.
    Type: Application
    Filed: April 7, 2021
    Publication date: July 22, 2021
    Inventors: Max John Batley, Simon John Craske, Ian Michael Caulfield, Peter Richard Greenhalgh, Allan John Skillman, Antony John Penton
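
The sketch below illustrates the power-saving step above: on entry to the second processing mode, only a fraction of a storage structure's entries stays active and the rest are treated as powered down. The structure, fraction and interface are illustrative assumptions.

```python
# When the second (lower-power) processing mode is selected, a subset of a
# storage structure's entries is placed in a power-saving state, trading
# capacity for leakage power.

class StorageStructure:
    def __init__(self, n_entries):
        self.active = list(range(n_entries))  # all entries usable in the first mode

    def enter_second_mode(self, keep_fraction=0.25):
        keep = max(1, int(len(self.active) * keep_fraction))
        powered_down = self.active[keep:]     # this subset enters the power-saving state
        self.active = self.active[:keep]
        return powered_down

tlb = StorageStructure(32)
print(len(tlb.enter_second_mode()))   # 24 entries placed in the power-saving state
print(len(tlb.active))                # 8 remain active
```
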
  • Publication number: 20210117621
    Abstract: Systems and methods for dynamically generating object models corresponding to regulations. According to certain aspects, a server computer may access a regulation and automatically generate a summary of the regulation based on a specific set of sentences. The server computer may additionally determine a set of topics and named-entity attributes for text within a regulation object model, as well as a probability that a topic or attribute is applicable to the regulation. Further, the server computer may generate and enrich object models according to the various analyses and make the enriched object models available for review by entities and users of regulatory compliance services.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 22, 2021
    Inventors: Spencer Sharpe, Annie Ibrahim Rana, Valeriy Liberman, Michael Arnold, Kyle Michael Caulfield, James Cogley, Lisa Epstein, Tricia Sheehan, Rashid Mehdiyev, Saurav Acharya
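
A toy version of the enrichment flow above: a regulation's text yields a short extractive summary plus per-topic applicability scores attached to an object model. The keyword lists and scoring stand in for the actual summarisation and topic models and are purely illustrative.

```python
# Enrich a regulation object model with an extractive summary and rough
# per-topic applicability probabilities (keyword counting stands in for
# the real models).

TOPIC_KEYWORDS = {
    "data-privacy": {"personal", "data", "consent"},
    "reporting":    {"report", "disclose", "filing"},
}

def enrich(regulation_text):
    sentences = [s.strip() for s in regulation_text.split(".") if s.strip()]
    scored = sorted(sentences, key=lambda s: len(s.split()), reverse=True)
    summary = scored[:2]                    # crude "specific set of sentences"
    words = set(regulation_text.lower().split())
    topics = {t: round(len(words & kw) / len(kw), 2) for t, kw in TOPIC_KEYWORDS.items()}
    return {"summary": summary, "topics": topics}

text = ("Entities must report breaches of personal data within 72 hours. "
        "Consent is required before processing. Filing is annual.")
print(enrich(text))
```
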
  • Patent number: 10958717
    Abstract: A server system is provided that includes a plurality of servers, each server including at least one hardware acceleration device and at least one processor communicatively coupled to the hardware acceleration device by an internal data bus and executing a host server instance, the host server instances of the plurality of servers collectively providing a software plane, and the hardware acceleration devices of the plurality of servers collectively providing a hardware acceleration plane that implements a plurality of hardware accelerated services, wherein each hardware acceleration device maintains in memory a data structure that contains load data indicating a load of each of a plurality of target hardware acceleration devices, and wherein a requesting hardware acceleration device routes the request to a target hardware acceleration device that is indicated by the load data in the data structure to have a lower load than other of the target hardware acceleration devices.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adrian Michael Caulfield, Eric S. Chung, Michael Konstantinos Papamichael, Douglas C. Burger, Shlomi Alkalay
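
The routing decision above can be sketched as a lookup in the locally held load table: the requesting device picks the candidate target with the lowest reported load. The table fields and tie-breaking rule are assumptions for illustration.

```python
# A requesting acceleration device keeps a local table of load data for
# candidate targets and routes the request to a lightly loaded one.

load_table = {
    "accel-03": {"queue_depth": 12, "utilization": 0.90},
    "accel-07": {"queue_depth": 2,  "utilization": 0.35},
    "accel-11": {"queue_depth": 5,  "utilization": 0.50},
}

def route_request(request, table):
    target = min(table, key=lambda d: (table[d]["queue_depth"], table[d]["utilization"]))
    return target, request

print(route_request({"service": "crypto", "payload": b"..."}, load_table)[0])  # accel-07
```
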
  • Publication number: 20210042227
    Abstract: Coherency control circuitry (10) supports processing of a safe-speculative-read transaction received from a requesting master device (4). The safe-speculative-read transaction is of a type requesting that target data is returned to a requesting cache (11) of the requesting master device (4) while prohibiting any change in coherency state associated with the target data in other caches (12) in response to the safe-speculative-read transaction. In response, at least when the target data is cached in a second cache associated with a second master device, at least one of the coherency control circuitry (10) and the second cache (12) is configured to return a safe-speculative-read response while maintaining the target data in the same coherency state within the second cache. This helps to mitigate against speculative side-channel attacks.
    Type: Application
    Filed: March 12, 2019
    Publication date: February 11, 2021
    Inventors: Andreas Lars SANDBERG, Stephan DIESTELHORST, Nikos NIKOLERIS, Ian Michael CAULFIELD, Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Albin Pierrick TONNERRE
  • Publication number: 20210044679
    Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
    Type: Application
    Filed: August 9, 2019
    Publication date: February 11, 2021
    Inventors: Adrian Michael CAULFIELD, Michael Konstantinos PAPAMICHAEL
  • Publication number: 20210026635
    Abstract: An apparatus and method are provided for controlling allocation of instructions into an instruction cache storage. The apparatus comprises processing circuitry to execute instructions, fetch circuitry to fetch instructions from memory for execution by the processing circuitry, and an instruction cache storage to store instructions fetched from the memory by the fetch circuitry. Cache control circuitry is responsive to the fetch circuitry fetching a target instruction from a memory address determined as a target address of an instruction flow changing instruction, at least when the memory address is within a specific address range, to prevent allocation of the fetched target instruction into the instruction cache storage unless the fetched target instruction is at least one specific type of instruction. It has been found that such an approach can inhibit speculation-based cache timing side-channel attacks.
    Type: Application
    Filed: March 20, 2019
    Publication date: January 28, 2021
    Inventors: Frederic Claude Marie PIRY, Peter Richard GREENHALGH, Ian Michael CAULFIELD, Albin Pierrick TONNERRE
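
A compact sketch of the allocation filter above: a fetched instruction that is the target of an instruction-flow change into a protected address range is only allocated into the instruction cache if it is of an allowed type. The address range and the allowed instruction types below are illustrative assumptions.

```python
# Prevent instruction-cache allocation of branch targets inside a protected
# address range unless the fetched instruction is of an expected type.

ALLOWED_TARGET_TYPES = {"BTI", "PACIASP"}            # assumed landing-pad instructions
PROTECTED_RANGE = range(0xFFFF0000, 0xFFFFFFFF)

def may_allocate(fetch_addr, is_branch_target, insn_type):
    if not is_branch_target or fetch_addr not in PROTECTED_RANGE:
        return True                                  # normal allocation policy applies
    return insn_type in ALLOWED_TARGET_TYPES         # otherwise require a valid landing pad

print(may_allocate(0xFFFF2000, True, "LDR"))   # False: speculative gadget not cached
print(may_allocate(0xFFFF2000, True, "BTI"))   # True
print(may_allocate(0x00400000, True, "LDR"))   # True: outside the protected range
```
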
  • Publication number: 20210026641
    Abstract: An apparatus and method of operating a data processing apparatus are disclosed. The apparatus comprises data processing circuitry to perform data processing operations in response to a sequence of instructions, wherein the data processing circuitry is capable of performing speculative execution of at least some of the sequence of instructions. A cache structure comprising entries stores temporary copies of data items which are subjected to the data processing operations, and speculative execution tracking circuitry monitors the correctness of the speculative execution and, responsive to an indication of incorrect speculative execution, causes entries in the cache structure allocated by the incorrect speculative execution to be evicted from the cache structure.
    Type: Application
    Filed: March 21, 2019
    Publication date: January 28, 2021
    Inventors: Ian Michael CAULFIELD, Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Albin Pierrick TONNERRE
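
The clean-up described above can be sketched by tagging speculative cache fills with the speculation episode that caused them and evicting those entries when that episode is squashed. The epoch tagging and cache model below are illustrative assumptions.

```python
# Cache fills performed under speculation are tagged with a speculation
# epoch; if that epoch turns out to be mis-speculated, the entries it
# allocated are evicted so they cannot be observed through timing later.

class SpeculativeCache:
    def __init__(self):
        self.entries = {}          # addr -> (data, spec_epoch or None)

    def fill(self, addr, data, spec_epoch=None):
        self.entries[addr] = (data, spec_epoch)

    def squash(self, bad_epoch):
        self.entries = {a: (d, e) for a, (d, e) in self.entries.items()
                        if e != bad_epoch}

cache = SpeculativeCache()
cache.fill(0x100, b"committed data")                 # non-speculative fill
cache.fill(0x2000, b"secret-dependent", spec_epoch=7)
cache.squash(bad_epoch=7)                            # branch resolved as mispredicted
print(sorted(cache.entries))                         # [256]: speculative entry gone
```
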
  • Publication number: 20210019148
    Abstract: Examples of the present disclosure relate to an apparatus comprising execution circuitry to execute instructions defining data processing operations on data items. The apparatus comprises cache storage to store temporary copies of the data items. The apparatus comprises prefetching circuitry to a) predict that a data item will be subject to the data processing operations by the execution circuitry by determining that the data item is consistent with an extrapolation of previous data item retrieval by the execution circuitry, and identifying that at least one control flow element of the instructions indicates that the data item will be subject to the data processing operations by the execution circuitry; and b) prefetch the data item into the cache storage.
    Type: Application
    Filed: March 14, 2019
    Publication date: January 21, 2021
    Inventors: Ian Michael CAULFIELD, Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Albin Pierrick TONNERRE
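
The two-part prefetch condition above is easy to sketch: a candidate address must both fit the extrapolated stride and be confirmed by a control-flow element, here a known loop trip count. The stride detector and loop-bound check are simplifications chosen for illustration.

```python
# Prefetch only addresses that both extend the observed access pattern and
# are confirmed by control flow (e.g. a loop bound), so data the program
# will never touch is not pulled into the cache.

def stride_candidates(history, count=2):
    stride = history[-1] - history[-2]
    return [history[-1] + stride * i for i in range(1, count + 1)]

def confirmed_by_control_flow(addr, base, element_size, trip_count):
    # e.g. a loop known to run `trip_count` iterations over an array
    return base <= addr < base + element_size * trip_count

history = [0x1000, 0x1008, 0x1010]                # 8-byte stride
for addr in stride_candidates(history, count=4):
    if confirmed_by_control_flow(addr, base=0x1000, element_size=8, trip_count=4):
        print(f"prefetch {addr:#x}")              # only 0x1018 qualifies
```
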
  • Publication number: 20200410110
    Abstract: An apparatus comprises processing circuitry 14 to perform data processing in response to instructions, the processing circuitry supporting speculative processing of read operations for reading data from a memory system 20, 22; and control circuitry 12, 14, 20 to identify whether a sequence of instructions to be processed by the processing circuitry includes a speculative side-channel hint instruction indicative of whether there is a risk of information leakage if at least one subsequent read operation is processed speculatively, and to determine whether to trigger a speculative side-channel mitigation measure depending on whether the instructions include the speculative side-channel hint instruction. This can help to reduce the performance impact of measures taken to protect against speculative side-channel attacks.
    Type: Application
    Filed: March 12, 2019
    Publication date: December 31, 2020
    Inventors: Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Ian Michael CAULFIELD, Albin Pierrick TONNERRE
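
Finally, the hint-driven decision above can be sketched as a scan of the fetched instruction window: a hint mnemonic tells the control logic whether to apply the speculative side-channel mitigation, with a conservative default when no hint is present. The mnemonics and default policy are invented for illustration.

```python
# Decide whether to trigger a speculative side-channel mitigation based on
# a hint instruction in the fetched sequence; fall back to a conservative
# policy when no hint is present.

def needs_mitigation(instruction_window, default_when_no_hint=True):
    for insn in instruction_window:
        if insn == "SPEC_HINT_UNSAFE":
            return True          # hint: subsequent reads may leak information
        if insn == "SPEC_HINT_SAFE":
            return False         # hint: safe to speculate freely
    return default_when_no_hint  # no hint: conservative default

print(needs_mitigation(["LDR", "SPEC_HINT_SAFE", "LDR"]))    # False
print(needs_mitigation(["LDR", "SPEC_HINT_UNSAFE", "LDR"]))  # True
print(needs_mitigation(["LDR", "ADD"]))                      # True (conservative)
```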