Patents by Inventor Michael Caulfield

Michael Caulfield has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210044679
    Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
    Type: Application
    Filed: August 9, 2019
    Publication date: February 11, 2021
    Inventors: Adrian Michael CAULFIELD, Michael Konstantinos PAPAMICHAEL
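The bitmask idea above can be illustrated with a minimal software sketch (the checksum algorithm and mask encoding here are illustrative, not the scheme claimed in the application): a checksum computed only over masked-in bytes is unchanged by errors in excluded bytes, so such errors do not force retransmission.

```python
def masked_checksum(packet: bytes, bitmask: bytes) -> int:
    """16-bit additive checksum over only the bits the mask protects."""
    total = 0
    for byte, mask in zip(packet, bitmask):
        total = (total + (byte & mask)) & 0xFFFF
    return total

# Byte 2 is excluded from protection by the mask.
mask = bytes([0xFF, 0xFF, 0x00, 0xFF])
sent = bytes([1, 2, 99, 4])
received = bytes([1, 2, 57, 4])   # transmission error only in the excluded byte
assert masked_checksum(sent, mask) == masked_checksum(received, mask)
```

An error in a protected byte, by contrast, still changes the checksum and is detected as usual.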
  • Publication number: 20210026641
    Abstract: An apparatus and method of operating a data processing apparatus are disclosed. The apparatus comprises data processing circuitry to perform data processing operations in response to a sequence of instructions, wherein the data processing circuitry is capable of performing speculative execution of at least some of the sequence of instructions. A cache structure comprising entries stores temporary copies of data items which are subjected to the data processing operations, and speculative execution tracking circuitry monitors the correctness of the speculative execution and, responsive to an indication of incorrect speculative execution, causes entries in the cache structure allocated by the incorrect speculative execution to be evicted from the cache structure.
    Type: Application
    Filed: March 21, 2019
    Publication date: January 28, 2021
    Inventors: Ian Michael CAULFIELD, Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Albin Pierrick TONNERRE
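As a rough software analogue of the mechanism in this abstract (a toy model, not the claimed circuitry): entries allocated during speculation are tracked, and an indication of mis-speculation evicts exactly those entries, removing the cache-state side effects an attacker might probe.

```python
class SpeculationAwareCache:
    """Toy cache: speculative allocations are rolled back on mis-speculation."""
    def __init__(self):
        self.entries = {}           # address -> data
        self.speculative = set()    # addresses allocated speculatively

    def allocate(self, addr, data, speculating):
        self.entries[addr] = data
        if speculating:
            self.speculative.add(addr)

    def resolve(self, correct):
        """Called when speculation resolves; evict on incorrect speculation."""
        if not correct:
            for addr in self.speculative:
                self.entries.pop(addr, None)
        self.speculative.clear()

cache = SpeculationAwareCache()
cache.allocate(0x10, "a", speculating=False)
cache.allocate(0x20, "b", speculating=True)
cache.resolve(correct=False)
assert 0x10 in cache.entries and 0x20 not in cache.entries
```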
  • Publication number: 20210026635
    Abstract: An apparatus and method are provided for controlling allocation of instructions into an instruction cache storage. The apparatus comprises processing circuitry to execute instructions, fetch circuitry to fetch instructions from memory for execution by the processing circuitry, and an instruction cache storage to store instructions fetched from the memory by the fetch circuitry. Cache control circuitry is responsive to the fetch circuitry fetching a target instruction from a memory address determined as a target address of an instruction flow changing instruction, at least when the memory address is within a specific address range, to prevent allocation of the fetched target instruction into the instruction cache storage unless the fetched target instruction is at least one specific type of instruction. It has been found that such an approach can inhibit the performance of speculation-based cache timing side-channel attacks.
    Type: Application
    Filed: March 20, 2019
    Publication date: January 28, 2021
    Inventors: Frederic Claude Marie PIRY, Peter Richard GREENHALGH, Ian Michael CAULFIELD, Albin Pierrick TONNERRE
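A sketch of the allocation filter described above (the instruction encoding, the protected range, and the choice of allowed type are hypothetical; the abstract leaves the "specific type" to the embodiment):

```python
# Hypothetical: only branch-target-marker-style instructions may be
# allocated when fetched as a branch target into the protected range.
ALLOWED_KINDS = {"branch_target_marker"}

def should_allocate(target_addr, instr, protected_range):
    """Gate i-cache allocation for branch targets in a protected range."""
    lo, hi = protected_range
    if lo <= target_addr < hi and instr["kind"] not in ALLOWED_KINDS:
        return False
    return True
```

Targets outside the protected range, and allowed instruction types inside it, are cached normally; everything else bypasses allocation so mis-speculated branch targets leave no i-cache footprint.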
  • Publication number: 20210019148
    Abstract: Examples of the present disclosure relate to an apparatus comprising execution circuitry to execute instructions defining data processing operations on data items. The apparatus comprises cache storage to store temporary copies of the data items. The apparatus comprises prefetching circuitry to a) predict that a data item will be subject to the data processing operations by the execution circuitry by determining that the data item is consistent with an extrapolation of previous data item retrieval by the execution circuitry, and identifying that at least one control flow element of the instructions indicates that the data item will be subject to the data processing operations by the execution circuitry; and b) prefetch the data item into the cache storage.
    Type: Application
    Filed: March 14, 2019
    Publication date: January 21, 2021
    Inventors: Ian Michael CAULFIELD, Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Albin Pierrick TONNERRE
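The two-part prediction in this abstract, stride extrapolation combined with a control-flow check, can be sketched as follows (the loop-bound check stands in for "at least one control flow element"; names and thresholds are illustrative):

```python
def should_prefetch(history, candidate, loop_bound):
    """Prefetch only when the candidate address extends the observed
    stride pattern AND control flow says it will actually be accessed."""
    if len(history) < 2:
        return False
    stride = history[-1] - history[-2]
    extrapolated = candidate == history[-1] + stride    # part (a)
    within_bounds = candidate < loop_bound              # part (b)
    return extrapolated and within_bounds
```

Requiring both conditions avoids prefetching past the end of a loop, one case a pure stride prefetcher would get wrong.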
  • Publication number: 20200410088
    Abstract: An apparatus (2) has processing circuitry to process micro-operations, the processing circuitry supporting speculative processing of read micro-operations for reading data from a memory system. A cache (6, 8) is provided to cache the micro-operations or instructions decoded to generate the micro-operations. Profiling circuitry (40) annotates at least one cached micro-operation or instruction with annotation information depending on analysis of whether a read micro-operation satisfies a speculative side-channel condition indicative of a risk of information leakage if the read micro-operation is processed speculatively. The processing circuitry (12, 14) determines whether to trigger a speculative side-channel mitigation measure depending on the annotation information stored in the cache (6, 8).
    Type: Application
    Filed: March 12, 2019
    Publication date: December 31, 2020
    Inventors: Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Ian Michael CAULFIELD, Albin Pierrick TONNERRE
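A toy model of the annotation flow in this abstract (not the claimed circuit; the "mitigated" prefix simply marks where a real design would apply its mitigation): the profiler writes a risk annotation next to a cached micro-operation, and issue logic consults it.

```python
class AnnotatedUopCache:
    """Micro-op cache whose entries carry a side-channel-risk annotation."""
    def __init__(self):
        self.lines = {}   # addr -> (uop, risky)

    def fill(self, addr, uop):
        # Until profiling has analysed the uop, conservatively mark it risky.
        self.lines[addr] = (uop, True)

    def annotate(self, addr, risky):
        uop, _ = self.lines[addr]
        self.lines[addr] = (uop, risky)

    def issue(self, addr):
        uop, risky = self.lines[addr]
        return ("mitigated-" + uop) if risky else uop
```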
  • Publication number: 20200410110
    Abstract: An apparatus comprises processing circuitry 14 to perform data processing in response to instructions, the processing circuitry supporting speculative processing of read operations for reading data from a memory system 20, 22; and control circuitry 12, 14, 20 to identify whether a sequence of instructions to be processed by the processing circuitry includes a speculative side-channel hint instruction indicative of whether there is a risk of information leakage if at least one subsequent read operation is processed speculatively, and to determine whether to trigger a speculative side-channel mitigation measure depending on whether the instructions include the speculative side-channel hint instruction. This can help to reduce the performance impact of measures taken to protect against speculative side-channel attacks.
    Type: Application
    Filed: March 12, 2019
    Publication date: December 31, 2020
    Inventors: Peter Richard GREENHALGH, Frederic Claude Marie PIRY, Ian Michael CAULFIELD, Albin Pierrick TONNERRE
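The hint-instruction idea above can be sketched in software (the mnemonic `spec_safe_hint` and the barrier insertion are hypothetical stand-ins for whatever mitigation a real design applies):

```python
SAFE_HINT = "spec_safe_hint"

def mitigate_reads(instrs):
    """Reads after a speculation-safe hint skip the mitigation barrier;
    other reads get one inserted before them."""
    out, hinted = [], False
    for op in instrs:
        if op == SAFE_HINT:
            hinted = True
            continue
        if op == "read" and not hinted:
            out.append("barrier")
        out.append(op)
    return out
```

This is the performance win the abstract mentions: code the compiler knows is leak-free carries the hint and avoids paying for the mitigation.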
  • Patent number: 10846092
    Abstract: Processing circuitry includes execute circuitry for executing micro-operations in response to instructions fetched from a data store. Control circuitry is provided to determine, based on availability of at least one processing resource, how many micro-operations are to be executed by the execute circuitry in response to a given set of one or more instructions fetched from the data store.
    Type: Grant
    Filed: May 12, 2016
    Date of Patent: November 24, 2020
    Assignee: ARM Limited
    Inventor: Ian Michael Caulfield
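As one hypothetical instance of the decision this patent describes (the instruction and micro-op names are invented for illustration): a fused multiply-add might issue as one micro-operation when a suitable unit is free, and be cracked into two otherwise.

```python
def decode(instr, wide_alu_free):
    """Decide how many micro-ops an instruction becomes based on
    availability of a processing resource (here, a wide ALU)."""
    if instr == "fmadd":
        return ["fmadd"] if wide_alu_free else ["fmul", "fadd"]
    return [instr]
```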
  • Patent number: 10810343
    Abstract: A language disclosed herein includes a loop construct that maps to a circuit implementation. The circuit implementation may be used to design or program a synchronous digital circuit. The circuit implementation includes a hardware pipeline that implements a body of a loop and a condition associated with the loop. The circuit implementation also includes hardware first-in-first-out (FIFO) queues that marshal threads (i.e. collections of local variables) into, around, and out of the hardware pipeline. A pipeline policy circuit limits the number of threads allowed within the hardware pipeline to the capacity of the hardware FIFO queues.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: October 20, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Blake D. Pelton, Adrian Michael Caulfield
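The pipeline policy circuit in this abstract is essentially an admission gate; a minimal software sketch (illustrative, not the patented circuit) looks like:

```python
class PipelinePolicy:
    """Admit a thread into the loop pipeline only while the number in
    flight stays within the FIFO queue's capacity, so threads circling
    back around the loop never find their queue full."""
    def __init__(self, fifo_capacity):
        self.capacity = fifo_capacity
        self.in_flight = 0

    def try_admit(self):
        if self.in_flight < self.capacity:
            self.in_flight += 1
            return True
        return False

    def retire(self):
        self.in_flight -= 1
```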
  • Patent number: 10812415
    Abstract: Active intelligent message filtering can be utilized to provide error resiliency, thereby allowing messages to be received without traditional error detection, and, in turn, avoiding the inefficiency of retransmission of network communications discarded due to network transmission errors detected by such traditional error detection mechanisms. Network transmission errors can result in the receiving application receiving messages that appear to comprise values that differ from the values originally transmitted by the transmitting application. Based on the inaccuracy tolerance applicable to the transmitting and receiving applications, rules can be applied to intelligently filter the received messages, replacing the received values with replacement values according to the rules. In such a manner, the receiving application can continue to receive usable data from the transmitting application without any error detection at lower network communication levels.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: October 20, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adrian Michael Caulfield, Michael Konstantinos Papamichael
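The rule-based replacement described above can be sketched as follows (the predicate/replacement rule shape and the tolerance range are illustrative assumptions, not the patented rule format):

```python
def filter_value(value, rules):
    """Apply the first matching rule; each rule is (predicate, replacement).
    Values no rule flags pass through unchanged."""
    for predicate, replacement in rules:
        if predicate(value):
            return replacement
    return value

# Hypothetical tolerance rule: a reading outside [0, 100] is implausible,
# so substitute a usable value instead of requesting retransmission.
rules = [(lambda v: not (0 <= v <= 100), 50)]
```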
  • Publication number: 20200327372
    Abstract: Systems and methods for classifying product feedback by an electronic device are described. According to certain aspects, an electronic device may receive consumer feedback entries associated with various products, where each entry may include an initial classification. The electronic device may analyze each entry using a machine learning model to determine a subsequent classification for the entry. When there is a mismatch between classifications, the electronic device may present information associated with the entry for review by a user, where the user may specify a final classification for the entry, and the electronic device may update the machine learning model for use in subsequent analyses.
    Type: Application
    Filed: April 12, 2019
    Publication date: October 15, 2020
    Inventors: Christian Dorn Anschuetz, Surekha Durvasula, Spencer Sharpe, Kyle Michael Caulfield
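The mismatch-driven review loop in this abstract can be sketched as a simple triage function (the model and labels here are stand-ins; the application's actual machine learning model is unspecified):

```python
def triage(entries, model):
    """Route entries whose model classification disagrees with the
    initial one to human review; the rest keep the agreed label."""
    auto, needs_review = [], []
    for text, initial_label in entries:
        predicted = model(text)
        if predicted == initial_label:
            auto.append((text, predicted))
        else:
            needs_review.append((text, initial_label, predicted))
    return auto, needs_review
```

In the described system, the reviewer's final label would then feed back into retraining the model.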
  • Publication number: 20200257531
    Abstract: A processing pipeline may have first and second execution circuits having different performance or energy consumption characteristics. Instruction supply circuitry may support different instruction supply schemes with different energy consumption or performance characteristics. This can allow a further trade-off between performance and energy efficiency. Architectural state storage can be shared between the execute units to reduce the overhead of switching between the units. In a parallel execution mode, groups of instructions can be executed on both execute units in parallel.
    Type: Application
    Filed: May 1, 2020
    Publication date: August 13, 2020
    Inventors: Peter Richard GREENHALGH, Simon John CRASKE, Ian Michael CAULFIELD, Max John BATLEY, Allan John SKILLMAN, Antony John PENTON
  • Publication number: 20200250098
    Abstract: An apparatus comprises a cache memory to store data as a plurality of cache lines each having a data size and an associated physical address in a memory, access circuitry to access the data stored in the cache memory, detection circuitry to detect, for at least a set of sub-units of the cache lines stored in the cache memory, whether a number of accesses by the access circuitry to a given sub-unit exceeds a predetermined threshold, in which each sub-unit has a data size that is smaller than the data size of a cache line, prediction circuitry to generate a prediction, for a given region of a plurality of regions of physical address space, of whether data stored in that region comprises streaming data in which each of one or more portions of the given cache line is predicted to be subject to a maximum of one read operation or multiple access data in which each of the one or more portions of the given cache line is predicted to be subject to more than one read operation, the prediction circuitry being configured
    Type: Application
    Filed: February 5, 2019
    Publication date: August 6, 2020
    Inventors: Lei MA, Alexander Alfred HORNUNG, Ian Michael CAULFIELD
  • Patent number: 10725923
    Abstract: An apparatus comprises a cache memory to store data as a plurality of cache lines each having a data size and an associated physical address in a memory, access circuitry to access the data stored in the cache memory, detection circuitry to detect, for at least a set of sub-units of the cache lines stored in the cache memory, whether a number of accesses by the access circuitry to a given sub-unit exceeds a predetermined threshold, in which each sub-unit has a data size that is smaller than the data size of a cache line, prediction circuitry to generate a prediction, for a given region of a plurality of regions of physical address space, of whether data stored in that region comprises streaming data in which each of one or more portions of the given cache line is predicted to be subject to a maximum of one read operation or multiple access data in which each of the one or more portions of the given cache line is predicted to be subject to more than one read operation, the prediction circuitry being configured
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: July 28, 2020
    Assignee: Arm Limited
    Inventors: Lei Ma, Alexander Alfred Hornung, Ian Michael Caulfield
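The streaming-versus-multiple-access prediction shared by this patent and the application above can be modeled in a few lines (threshold and region naming are illustrative; the detection/prediction circuitry of the claims is hardware):

```python
from collections import Counter

class RegionPredictor:
    """Count reads per cache-line sub-unit; a region whose sub-units are
    read more than `threshold` times is predicted 'multiple-access',
    otherwise 'streaming' (each portion read at most once)."""
    def __init__(self, threshold=1):
        self.threshold = threshold
        self.counts = Counter()

    def record_read(self, region, subunit):
        self.counts[(region, subunit)] += 1

    def predict(self, region):
        hits = [c for (r, _), c in self.counts.items() if r == region]
        if hits and max(hits) > self.threshold:
            return "multiple-access"
        return "streaming"   # default, including unseen regions
```

A cache could then, for example, bypass or early-evict lines from streaming regions since they will not be reused.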
  • Publication number: 20200226228
    Abstract: A multi-threaded programming language and compiler generates synchronous digital circuits that maintain thread execution order by generating pipelines with code paths that have the same number of stages. The compiler balances related code paths within a pipeline by adding additional stages to a code path that has fewer stages. Programming constructs that, by design, allow thread execution to be re-ordered, may be placed in a reorder block construct that releases threads in the order they entered the programming construct. First-in-first-out (FIFO) queues pass local variables between pipelines. Local variables are popped from FIFOs in the order they were pushed, preserving thread execution order across pipelines.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 16, 2020
    Inventors: Blake D. PELTON, Adrian Michael CAULFIELD
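The stage-balancing step in this abstract, padding the shorter code path so both sides of a pipeline have equal depth, can be sketched as (stage representation is illustrative):

```python
def balance_paths(path_a, path_b):
    """Pad the shorter code path with no-op stages so both paths have
    equal depth; threads from either path then rejoin in order."""
    diff = len(path_a) - len(path_b)
    if diff > 0:
        path_b = path_b + ["nop"] * diff
    elif diff < 0:
        path_a = path_a + ["nop"] * (-diff)
    return path_a, path_b
```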
  • Publication number: 20200226051
    Abstract: Program source code defined in a multi-threaded imperative programming language can be compiled into a circuit description for a synchronous digital circuit (“SDC”) that includes pipelines and queues. During compilation, data defining a debugging network for the SDC can be added to the circuit description. The circuit description can then be used to generate the SDC such as, for instance, on an FPGA. A CPU connected to the SDC can utilize the debugging network to query the pipelines for state information such as, for instance, data indicating that an input queue for a pipeline is empty, data indicating the state of an output queue, or data indicating if a wait condition for a pipeline has been satisfied. A profiling tool can execute on the CPU for use in debugging the SDC.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 16, 2020
    Inventors: Blake D. PELTON, Adrian Michael CAULFIELD
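The kinds of state the abstract says a CPU can query over the debugging network might look like this on the host side (a hypothetical shape for the query result, not the actual debug protocol):

```python
def query_pipeline(pipeline):
    """Summarise one pipeline's debug-visible state: input-queue
    emptiness, output-queue depth, and wait-condition status."""
    return {
        "input_empty": len(pipeline["input_queue"]) == 0,
        "output_depth": len(pipeline["output_queue"]),
        "wait_satisfied": pipeline["wait_condition"],
    }
```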
  • Publication number: 20200226227
    Abstract: A disclosed language includes a loop construct that maps to a circuit implementation. The circuit implementation may be used to design or program a synchronous digital circuit. The circuit implementation includes a hardware pipeline that implements the loop's body and condition. The circuit implementation also includes hardware first-in-first-out queues that marshal threads (i.e. collections of local variables) into, around, and out of the pipeline. A pipeline policy circuit limits the number of threads allowed within the pipeline to the capacity of the queue.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 16, 2020
    Inventors: Blake D. PELTON, Adrian Michael CAULFIELD
  • Publication number: 20200225921
    Abstract: A programming language and a compiler are disclosed that optimize the use of look-up tables (LUTs) on a programmed synchronous digital circuit (SDC), such as a field programmable gate array (FPGA). LUTs are optimized by merging multiple computational operations into the same LUT. A compiler parses source code into an intermediate representation (IR). Each node of the IR that represents an operator (e.g. ‘&’, ‘+’) is mapped to a LUT that implements that operator. The compiler iteratively traverses the IR, merging adjacent LUTs into a LUT that performs both operations and performing input removal optimizations. Additional operators may be merged into a merged LUT until all the LUT's inputs are assigned. Pipeline stages are then generated based on merged LUTs, and an SDC is programmed based on the pipeline and the merged LUTs.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 16, 2020
    Inventors: Blake D. PELTON, Adrian Michael CAULFIELD
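The core merge step, composing two operators' truth tables into one, can be sketched directly (a simplified model: real LUT merging must also respect the device's input-count limit and the input-removal optimizations the abstract mentions):

```python
import itertools

def merge_luts(lut_f, lut_g, n_f, n_g):
    """Build one truth table computing g(f(x...), y...): g's first input
    is driven by f's output, so both operators fit in a single LUT."""
    merged = {}
    for bits in itertools.product((0, 1), repeat=n_f + n_g - 1):
        f_in, g_rest = bits[:n_f], bits[n_f:]
        merged[bits] = lut_g[(lut_f[f_in],) + g_rest]
    return merged

AND = {(a, b): a & b for a in (0, 1) for b in (0, 1)}
OR = {(a, b): a | b for a in (0, 1) for b in (0, 1)}
AND_OR = merge_luts(AND, OR, 2, 2)   # computes (a & b) | c in one LUT
```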
  • Publication number: 20200225920
    Abstract: A multi-threaded imperative programming language includes language constructs that map to circuit implementations. The constructs can include a condition statement that enables a thread in a hardware pipeline to wait for a specified condition to occur, identify the start and end of a portion of source code instructions that are to be executed atomically, or indicate that a read-modify-write memory operation is to be performed atomically. Source code that includes one or more constructs mapping to a circuit implementation can be compiled to generate a circuit description. The circuit description can be expressed using hardware description language (HDL), for instance. The circuit description can, in turn, be used to generate a synchronous digital circuit that includes the circuit implementation. For example, HDL might be utilized to generate an FPGA image or bitstream that can be utilized to program an FPGA that includes the circuit implementation associated with the language construct.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 16, 2020
    Inventors: Blake D. PELTON, Adrian Michael CAULFIELD
  • Publication number: 20200225919
    Abstract: A multi-threaded imperative programming language includes a language construct defining a function call. A circuit implementation for the construct includes a first pipeline, a second pipeline, and a third pipeline. The first hardware pipeline outputs variables to a first queue and outputs parameters for the function to a second queue. The second hardware pipeline obtains the function parameters from the second queue, performs the function, and stores the results of the function in a third queue. The third hardware pipeline retrieves the results generated by the second pipeline from the third queue and retrieves the variables from the first queue. The third hardware pipeline performs hardware operations specified by the source code using the variables and the results of the function. A single instance of the circuit implementation can be utilized to implement calls to the same function made from multiple locations within source code.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 16, 2020
    Inventors: Blake D. PELTON, Adrian Michael CAULFIELD
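The three-pipeline, three-queue call structure above can be modeled sequentially in software (the thread tuples and the trailing add are illustrative; in hardware the three stages run concurrently):

```python
from collections import deque

def run_call(threads, fn):
    """Three toy 'pipelines' joined by queues: the first saves each
    thread's live variable and enqueues the call's argument, the second
    runs the function body, and the third rejoins results with the
    saved variables in original thread order."""
    live_vars, params, results = deque(), deque(), deque()
    for thread_id, local_var, arg in threads:      # first pipeline
        live_vars.append((thread_id, local_var))
        params.append(arg)
    while params:                                  # second pipeline
        results.append(fn(params.popleft()))
    out = []
    while results:                                 # third pipeline
        thread_id, local_var = live_vars.popleft()
        out.append((thread_id, local_var + results.popleft()))
    return out
```

Because both queues are FIFO, results pair with the right thread's saved variables without any explicit tagging.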
  • Patent number: 10705587
    Abstract: Apparatus for processing data is provided with fetch circuitry for fetching program instructions for execution from one or more active threads of instructions having respective program counter values. Pipeline circuitry has a first operating mode and a second operating mode. Mode switching circuitry switches the pipeline circuitry between the first operating mode and the second operating mode in dependence upon a number of active threads of program instructions having program instructions available to be executed. The first operating mode has a lower average energy consumption per instruction executed than the second operating mode and the second operating mode has a higher average rate of instruction execution for a single thread than the first operating mode. The first operating mode may utilise a barrel processing pipeline to perform interleaved multiple thread processing. The second operating mode may utilise an out-of-order processing pipeline for performing out-of-order processing.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: July 7, 2020
    Assignee: ARM Limited
    Inventors: Peter Richard Greenhalgh, Simon John Craske, Ian Michael Caulfield, Max John Batley, Allan John Skillman, Antony John Penton
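The mode-switching decision in this final patent reduces to a thread-count policy; one hypothetical version (thresholds and hysteresis are illustrative assumptions, not from the claims):

```python
def choose_pipeline(active_threads, current, low=1, high=3):
    """Pick the energy-efficient barrel pipeline when many threads are
    runnable, the high-single-thread-IPC out-of-order pipeline when few
    are; hysteresis avoids flapping near the boundary."""
    if active_threads >= high:
        return "barrel"
    if active_threads <= low:
        return "out-of-order"
    return current
```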