Patents by Inventor Simon Nield

Simon Nield has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190266018
    Abstract: A method of scheduling tasks within a GPU or other highly parallel processing unit is described which is both age-aware and wakeup-event driven. Received tasks are added to an age-based task queue. Wakeup event bits for task types, or for combinations of task types and data groups, are set in response to completion of a task dependency, and these wakeup event bits are used to select the oldest task from the queue that satisfies predefined criteria.
    Type: Application
    Filed: May 14, 2019
    Publication date: August 29, 2019
    Inventors: Simon Nield, Adam de Grasse, Luca Iuliano, Ollie Mower, Yoong-Chert Foo
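
A minimal software sketch of the age-aware, wakeup-event-driven selection described above may be useful; the same abstract also appears below under granted patent 10318348 and earlier publication 20180088989, so this one illustration serves all three entries. The patent publishes no code, so every name here (AgeAwareScheduler, submit, dependency_complete) is hypothetical, and the "predefined criteria" are reduced to a single wakeup bit per (task type, data group) pair.

```python
# Hypothetical model: an age-ordered queue plus wakeup event bits.
from collections import deque
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    task_type: str
    data_group: int

class AgeAwareScheduler:
    def __init__(self):
        self.queue = deque()   # insertion order doubles as age order
        self.wakeup = set()    # wakeup event bits, keyed by (type, group)

    def submit(self, task: Task):
        # Newly received tasks join the back of the age-based queue.
        self.queue.append(task)

    def dependency_complete(self, task_type: str, data_group: int):
        # Completion of a task dependency sets the wakeup event bit for
        # this combination of task type and data group.
        self.wakeup.add((task_type, data_group))

    def select_oldest_ready(self):
        # Scan from the front (oldest first) and pick the first task
        # whose wakeup event bit is set.
        for task in self.queue:
            if (task.task_type, task.data_group) in self.wakeup:
                self.queue.remove(task)
                return task
        return None   # nothing is ready yet

sched = AgeAwareScheduler()
sched.submit(Task("shade_a", "pixel", 0))
sched.submit(Task("shade_b", "vertex", 1))
sched.dependency_complete("vertex", 1)
print(sched.select_oldest_ready())   # shade_b: the oldest ready task
```

In hardware the wakeup bits for all queue entries would be evaluated in parallel rather than scanned front to back; the sequential loop is purely for illustration.
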
  • Patent number: 10318348
    Abstract: A method of scheduling tasks within a GPU or other highly parallel processing unit is described which is both age-aware and wakeup-event driven. Received tasks are added to an age-based task queue. Wakeup event bits for task types, or for combinations of task types and data groups, are set in response to completion of a task dependency, and these wakeup event bits are used to select the oldest task from the queue that satisfies predefined criteria.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: June 11, 2019
    Assignee: Imagination Technologies Limited
    Inventors: Simon Nield, Adam de Grasse, Luca Iuliano, Ollie Mower, Yoong-Chert Foo
  • Publication number: 20190095360
    Abstract: Apparatus identifies a set of M output memory addresses from a larger set of N input memory addresses containing at least one non-unique memory address. A comparator block performs comparisons of memory addresses from the set of N input memory addresses to generate a binary classification dataset that identifies a subset of the input addresses within which each address is unique. Each combination logic unit receives a predetermined selection of bits of the binary classification dataset and sorts those bits into an intermediary binary string in which the bits are ordered into a first group, identifying addresses belonging to the identified subset, and a second group, identifying addresses not belonging to the identified subset.
    Type: Application
    Filed: September 24, 2018
    Publication date: March 28, 2019
    Inventors: Luca Iuliano, Simon Nield, Thomas Rose
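
A software analogue of this classification may clarify it: the comparator block's pairwise comparisons are modelled as a scan that flags the first occurrence of each value, and the combination logic units' bit sorting as a stable partition. Both function names are invented, and hardware would perform the comparisons in parallel rather than in a loop.

```python
# Hypothetical analogue of the comparator block: bit i is 1 when
# addresses[i] is the first occurrence of its value, i.e. it belongs to
# the subset within which every address is unique.
def classify_unique(addresses):
    return [all(addr != addresses[j] for j in range(i))
            for i, addr in enumerate(addresses)]

# Hypothetical analogue of the combination logic units: a stable
# partition that orders subset members first, non-members second.
def partition(addresses, bits):
    members = [a for a, b in zip(addresses, bits) if b]
    others = [a for a, b in zip(addresses, bits) if not b]
    return members + others

addrs = [0x10, 0x20, 0x10, 0x30]   # N = 4 inputs, one duplicate
bits = classify_unique(addrs)      # [True, True, False, True]
print(partition(addrs, bits))      # [16, 32, 48, 16]: M = 3 unique first
```
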
  • Publication number: 20190087229
    Abstract: A memory subsystem for use with a single-instruction multiple-data (SIMD) processor comprising a plurality of processing units configured to process one or more workgroups, each comprising a plurality of SIMD tasks. The memory subsystem comprises: a shared memory partitioned into a plurality of memory portions for allocation to tasks that are to be processed by the processor; and a resource allocator configured to, in response to receiving a memory resource request for first memory resources in respect of a first-received task of a workgroup, allocate to the workgroup a block of memory portions sufficient in size for each task of the workgroup to receive memory resources in the block equivalent to the first memory resources.
    Type: Application
    Filed: September 17, 2018
    Publication date: March 21, 2019
    Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower, Jonathan Redshaw
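
The allocation policy lends itself to a small model: the first request received for a workgroup sizes an entire block, so that every later task of that workgroup can take an equivalent slice from it. The class name, the portion size, and returning None to stall an unsatisfiable request are all assumptions for illustration.

```python
# Hypothetical model of the resource allocator over a partitioned
# shared memory.
class ResourceAllocator:
    def __init__(self, shared_mem_size, portion_size):
        self.portion_size = portion_size
        self.free_portions = shared_mem_size // portion_size
        self.blocks = {}   # workgroup id -> [slice size, next offset]

    def request(self, workgroup_id, nbytes, tasks_per_workgroup):
        if workgroup_id not in self.blocks:
            # First-received task of the workgroup: reserve enough
            # portions for every task to get an equivalent allocation.
            total = nbytes * tasks_per_workgroup
            portions = -(-total // self.portion_size)   # ceiling division
            if portions > self.free_portions:
                return None               # assumption: stall the request
            self.free_portions -= portions
            self.blocks[workgroup_id] = [nbytes, 0]
        size, offset = self.blocks[workgroup_id]
        self.blocks[workgroup_id][1] += size
        return offset   # this task's slice within the workgroup's block

alloc = ResourceAllocator(shared_mem_size=4096, portion_size=256)
print(alloc.request("wg0", nbytes=512, tasks_per_workgroup=4))   # 0
print(alloc.request("wg0", nbytes=512, tasks_per_workgroup=4))   # 512
```

Reserving the whole block up front means later tasks of the workgroup can never fail to obtain their share, at the cost of holding memory before those tasks arrive.
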
  • Publication number: 20180365016
    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards, wherein inter-pipeline data hazards are identified at compile time. For each identified inter-pipeline data hazard, the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that hazard. When a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted (e.g. incremented) to indicate that there is a hazard related to the primary instruction; when the primary instruction has been resolved by one of the multiple parallel processing pipelines, the value of the counter is adjusted (e.g. decremented) to indicate that the hazard has been resolved.
    Type: Application
    Filed: June 15, 2018
    Publication date: December 20, 2018
    Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower
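
A toy model of the counter mechanism, assuming one counter per compiler-identified hazard: issuing a primary instruction increments its counter, resolving it decrements the counter, and a linked secondary instruction may issue only while the counter reads zero. All names are invented.

```python
# Hypothetical model of counter-based inter-pipeline hazard tracking.
class HazardCounters:
    def __init__(self, n_counters):
        self.counters = [0] * n_counters

    def issue_primary(self, counter_id):
        # The decoder outputs a primary instruction: a hazard is now
        # outstanding, so the linked counter is incremented.
        self.counters[counter_id] += 1

    def resolve_primary(self, counter_id):
        # A pipeline resolves the primary instruction: decrement.
        self.counters[counter_id] -= 1

    def secondary_may_issue(self, counter_id):
        # A secondary instruction waits while its counter is non-zero.
        return self.counters[counter_id] == 0

hz = HazardCounters(4)
hz.issue_primary(2)
print(hz.secondary_may_issue(2))   # False: dependent instruction stalls
hz.resolve_primary(2)
print(hz.secondary_may_issue(2))   # True: safe to issue
```
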
  • Publication number: 20180365009
    Abstract: A method of activating scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state and checking, by an instruction controller, if a swap flag is set in the decoded instruction. If the swap flag in the decoded instruction is set, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state.
    Type: Application
    Filed: June 18, 2018
    Publication date: December 20, 2018
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
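
A minimal sketch of the swap-flag path, assuming the flag is a single bit carried by each decoded instruction; the Instruction, Scheduler, and instruction_controller names are hypothetical.

```python
# Hypothetical model: a set swap flag de-activates the issuing task.
from dataclasses import dataclass

@dataclass
class Instruction:
    opcode: str
    swap_flag: bool

class Scheduler:
    def __init__(self):
        self.active = set()   # task ids currently in the active state

    def deactivate(self, task_id):
        # Change the task from the active state to a non-active state.
        self.active.discard(task_id)

def instruction_controller(task_id, instr, scheduler):
    # After decode, the controller checks the swap flag and, if it is
    # set, triggers the scheduler to de-activate the scheduled task.
    if instr.swap_flag:
        scheduler.deactivate(task_id)

sched = Scheduler()
sched.active.add("task0")
instruction_controller("task0", Instruction("MUL", swap_flag=True), sched)
print(sched.active)   # set(): task0 has been swapped out
```
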
  • Publication number: 20180365058
    Abstract: A method of activating scheduling instructions within a parallel processing unit is described. The method includes checking if an ALU targeted by a decoded instruction is full by checking a value of an ALU work fullness counter stored in the instruction controller and associated with the targeted ALU. If the targeted ALU is not full, the decoded instruction is sent to the targeted ALU for execution and the ALU work fullness counter associated with the targeted ALU is updated. If, however, the targeted ALU is full, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state. When an ALU changes from being full to not being full, the scheduler is triggered to re-activate an oldest scheduled task waiting for the ALU by removing the oldest scheduled task from the non-active state.
    Type: Application
    Filed: June 18, 2018
    Publication date: December 20, 2018
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
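
The fullness mechanism can be modelled with a counter and an age-ordered wait queue per ALU. The capacity value, class names, and return conventions below are assumptions, not figures from the patent.

```python
# Hypothetical model of per-ALU work fullness counters.
from collections import deque, defaultdict

class InstructionController:
    def __init__(self, alu_capacity):
        self.capacity = alu_capacity
        self.fullness = defaultdict(int)    # ALU id -> in-flight work
        self.waiting = defaultdict(deque)   # ALU id -> tasks, oldest first

    def issue(self, task_id, alu_id):
        if self.fullness[alu_id] < self.capacity:
            self.fullness[alu_id] += 1      # send and count the new work
            return True
        self.waiting[alu_id].append(task_id)   # ALU full: de-activate
        return False

    def work_complete(self, alu_id):
        self.fullness[alu_id] -= 1
        if self.waiting[alu_id]:
            # The ALU is no longer full: re-activate the oldest task
            # waiting for it.
            return self.waiting[alu_id].popleft()
        return None

ic = InstructionController(alu_capacity=1)
ic.issue("t0", alu_id=0)            # accepted
ic.issue("t1", alu_id=0)            # ALU full, so t1 is de-activated
print(ic.work_complete(alu_id=0))   # t1: oldest waiter is re-activated
```
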
  • Publication number: 20180365057
    Abstract: A method of scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state, and checking, by an instruction controller, if an ALU targeted by the decoded instruction is a primary instruction pipeline. If the targeted ALU is a primary instruction pipeline, a list associated with the primary instruction pipeline is checked to determine whether the scheduled task is already included in the list. If the scheduled task is already included in the list, the decoded instruction is sent to the primary instruction pipeline.
    Type: Application
    Filed: June 18, 2018
    Publication date: December 20, 2018
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
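
A minimal sketch of the bookkeeping this abstract describes. The abstract only states what happens when the task is already in the primary pipeline's list; registering an absent task before sending, as done below, is an assumption.

```python
# Hypothetical model of the per-primary-pipeline task list.
class PrimaryPipeline:
    def __init__(self):
        self.task_list = []   # scheduled tasks known to this pipeline
        self.issued = []      # instructions accepted for execution

    def send(self, task_id, instr):
        self.issued.append((task_id, instr))

def issue_to_primary(pipeline, task_id, instr):
    if task_id in pipeline.task_list:
        # Task already included in the list: send the decoded
        # instruction straight to the primary instruction pipeline.
        pipeline.send(task_id, instr)
    else:
        # Assumption: a task's first instruction registers it.
        pipeline.task_list.append(task_id)
        pipeline.send(task_id, instr)

p = PrimaryPipeline()
issue_to_primary(p, "t0", "FMA r0, r1, r2")
issue_to_primary(p, "t0", "FMA r3, r0, r2")
print(p.task_list, len(p.issued))   # ['t0'] 2
```
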
  • Publication number: 20180203701
    Abstract: A method for a plurality of pipelines, each having a processing element with first and second inputs and first and second lines, wherein at least one of the pipelines includes first and second logic operable to select a respective line so that data is received at the first and second inputs respectively. When a first mode is selected, for the at least one pipeline the first and second lines of that pipeline are selected, such that the processing element of that pipeline receives data via those lines, the first line being capable of supplying data that is different from the second line. When a second mode is selected, for the at least one pipeline a line of another pipeline is selected together with the second line of that pipeline, and the same data is supplied at the second line as at the first line.
    Type: Application
    Filed: January 16, 2018
    Publication date: July 19, 2018
    Inventors: Simon Nield, Thomas Rose
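
One plausible reading of the two modes is a per-pipeline operand multiplexer: in the first mode a processing element reads its own two lines, which may carry different data; in the second mode a line of another pipeline is selected and both inputs see the same data. This reading is inferred from the abstract alone and may not match the claims; the sketch is illustrative only.

```python
# Hypothetical multiplexer model of the two line-selection modes.
def route_operands(mode, own_line1, own_line2, other_line):
    """Return the (first, second) inputs seen by one processing element."""
    if mode == 1:
        # First mode: both inputs come from this pipeline's own lines,
        # which can supply different data to each input.
        return own_line1, own_line2
    # Second mode (one reading of the abstract): a line of another
    # pipeline is selected and the second line carries the same data,
    # so both inputs receive a single shared value.
    return other_line, other_line

# First mode: independent operands per pipeline.
print(route_operands(1, own_line1=3.0, own_line2=4.0, other_line=9.0))
# Second mode: the neighbouring pipeline's line feeds both inputs.
print(route_operands(2, own_line1=3.0, own_line2=4.0, other_line=9.0))
```
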
  • Publication number: 20180088989
    Abstract: A method of scheduling tasks within a GPU or other highly parallel processing unit is described which is both age-aware and wakeup-event driven. Received tasks are added to an age-based task queue. Wakeup event bits for task types, or for combinations of task types and data groups, are set in response to completion of a task dependency, and these wakeup event bits are used to select the oldest task from the queue that satisfies predefined criteria.
    Type: Application
    Filed: September 25, 2017
    Publication date: March 29, 2018
    Inventors: Simon Nield, Adam de Grasse, Luca Iuliano, Ollie Mower, Yoong-Chert Foo