Patents by Inventor Yoong-Chert Foo

Yoong-Chert Foo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230169701
    Abstract: A decoder is configured to decode a plurality of texels from a received block of texture data encoded according to the Adaptive Scalable Texture Compression (ASTC) format, and includes a parameter decode unit configured to decode configuration data for the received block of texture data, a colour decode unit configured to decode colour endpoint data for the plurality of texels of the received block in dependence on the configuration data, a weight decode unit configured to decode interpolation weight data for each of the plurality of texels of the received block in dependence on the configuration data, and at least one interpolator unit configured to calculate a colour value for each of the plurality of texels of the received block using the interpolation weight data for that texel and a pair of colour endpoints from the colour endpoint data.
    Type: Application
    Filed: January 25, 2023
    Publication date: June 1, 2023
    Inventors: Kenneth Rovers, Yoong Chert Foo
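    Illustrative sketch (editorial addition, not from the filing): a minimal Python model of the final interpolation stage the abstract describes, in which each texel colour is computed from a pair of decoded colour endpoints and that texel's interpolation weight. The endpoint and weight values are hypothetical pre-decoded inputs, and the fixed-point blend follows the ASTC-style form c = (c0*(64 - w) + c1*w + 32) >> 6.
```python
def interpolate_texel(endpoint0, endpoint1, weight):
    """Blend two RGBA endpoints with a 6-bit interpolation weight in 0..64."""
    return tuple((c0 * (64 - weight) + c1 * weight + 32) >> 6
                 for c0, c1 in zip(endpoint0, endpoint1))

def decode_block(endpoints, weights):
    """One colour per texel from a shared endpoint pair and per-texel weights."""
    return [interpolate_texel(endpoints[0], endpoints[1], w) for w in weights]

# Hypothetical 4x4 block: one endpoint pair and sixteen per-texel weights.
eps = ((0, 0, 0, 255), (255, 128, 64, 255))
ws = [0, 8, 16, 24, 32, 40, 48, 56, 64, 56, 48, 40, 32, 24, 16, 8]
for texel in decode_block(eps, ws):
    print(texel)
```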
  • Patent number: 11656908
    Abstract: A memory subsystem for use with a single-instruction multiple-data (SIMD) processor comprising a plurality of processing units configured for processing one or more workgroups each comprising a plurality of SIMD tasks, the memory subsystem comprising: a shared memory partitioned into a plurality of memory portions for allocation to tasks that are to be processed by the processor; and a resource allocator configured to, in response to receiving a memory resource request for first memory resources in respect of a first-received task of a workgroup, allocate to the workgroup a block of memory portions sufficient in size for each task of the workgroup to receive memory resources in the block equivalent to the first memory resources.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: May 23, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower, Jonathan Redshaw
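    Illustrative sketch (editorial addition, not from the filing): a small Python model of the allocation policy the abstract describes, in which the first memory request from a workgroup's first-received task reserves a block large enough for every task in that workgroup to receive the same amount. The class and parameter names are assumptions made for illustration.
```python
class WorkgroupAllocator:
    """Toy resource allocator over a shared memory pool (sizes in bytes)."""

    def __init__(self, shared_memory_size):
        self.free = shared_memory_size   # bytes not yet reserved for any workgroup
        self.blocks = {}                 # workgroup_id -> per-task allocation size

    def request(self, workgroup_id, tasks_in_workgroup, size):
        if workgroup_id not in self.blocks:
            # First-received task of the workgroup: reserve a block sized so every
            # task can receive memory equivalent to this first request.
            block = size * tasks_in_workgroup
            if block > self.free:
                raise MemoryError("shared memory exhausted")
            self.free -= block
            self.blocks[workgroup_id] = size
        # Later tasks of the same workgroup are served from the reserved block.
        return self.blocks[workgroup_id]

alloc = WorkgroupAllocator(shared_memory_size=64 * 1024)
print(alloc.request(workgroup_id=0, tasks_in_workgroup=8, size=1024))  # 1024
print(alloc.request(workgroup_id=0, tasks_in_workgroup=8, size=1024))  # 1024
print(alloc.free)                                                      # 57344
```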
  • Publication number: 20230097760
    Abstract: A method of activating scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state and checking, by an instruction controller, if a swap flag is set in the decoded instruction. If the swap flag in the decoded instruction is set, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state.
    Type: Application
    Filed: December 5, 2022
    Publication date: March 30, 2023
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
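    Illustrative sketch (editorial addition, not from the filing): a minimal Python model of the swap-flag behaviour the abstract describes; after an instruction of an active task is decoded, the controller checks the swap flag and, if it is set, asks the scheduler to move the task from the active state to a non-active state. All names and data structures are illustrative assumptions.
```python
from dataclasses import dataclass

ACTIVE, NON_ACTIVE = "active", "non-active"

@dataclass
class Instruction:
    opcode: str
    swap: bool = False   # swap flag carried by the decoded instruction

@dataclass
class Task:
    task_id: int
    state: str = ACTIVE

class Scheduler:
    def deactivate(self, task):
        task.state = NON_ACTIVE   # active -> non-active

def issue(task, instruction, scheduler):
    """Instruction-controller step: report the decode, then honour the swap flag."""
    print(f"task {task.task_id}: decoded {instruction.opcode}")
    if instruction.swap:
        scheduler.deactivate(task)

task = Task(task_id=3)
issue(task, Instruction("MAD"), Scheduler())
issue(task, Instruction("LD", swap=True), Scheduler())
print(task.state)   # non-active
```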
  • Publication number: 20230033355
    Abstract: A method of synchronizing a group of scheduled tasks within a parallel processing unit into a known state is described. The method uses a synchronization instruction in a scheduled task which triggers, in response to decoding of the instruction, an instruction decoder to place the scheduled task into a non-active state and forward the decoded synchronization instruction to an atomic ALU for execution. When the atomic ALU executes the decoded synchronization instruction, the atomic ALU performs an operation and check on data assigned to the group ID of the scheduled task and if the check is passed, all scheduled tasks having the particular group ID are removed from the non-active state.
    Type: Application
    Filed: October 13, 2022
    Publication date: February 2, 2023
    Inventors: Ollie Mower, Yoong-Chert Foo
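    Illustrative sketch (editorial addition, not from the filing): a simplified software model of the synchronization scheme the abstract describes; a task reaching the synchronization instruction is parked in a non-active state, a per-group value is updated and checked, and when the check passes every parked task with that group ID is released. Using an arrival count against the group size as the operation and check is an assumption chosen for illustration.
```python
class GroupSync:
    """Toy model of the atomic-ALU operation-and-check keyed by group ID."""

    def __init__(self, group_size):
        self.group_size = group_size
        self.arrived = {}   # group_id -> tasks that have reached the barrier
        self.parked = {}    # group_id -> task IDs currently in the non-active state

    def sync(self, task_id, group_id):
        # Decoder side: park the task (non-active state).
        self.parked.setdefault(group_id, []).append(task_id)
        # Atomic-ALU side: perform the operation (increment) and the check.
        self.arrived[group_id] = self.arrived.get(group_id, 0) + 1
        if self.arrived[group_id] == self.group_size:
            self.arrived[group_id] = 0
            return self.parked.pop(group_id)   # whole group leaves non-active state
        return []

barrier = GroupSync(group_size=3)
print(barrier.sync(0, group_id=7))   # []
print(barrier.sync(1, group_id=7))   # []
print(barrier.sync(2, group_id=7))   # [0, 1, 2]
```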
  • Patent number: 11568580
    Abstract: A decoder is configured to decode a plurality of texels from a received block of texture data encoded according to the Adaptive Scalable Texture Compression (ASTC) format, and includes a parameter decode unit configured to decode configuration data for the received block of texture data, a colour decode unit configured to decode colour endpoint data for the plurality of texels of the received block in dependence on the configuration data, a weight decode unit configured to decode interpolation weight data for each of the plurality of texels of the received block in dependence on the configuration data, and at least one interpolator unit configured to calculate a colour value for each of the plurality of texels of the received block using the interpolation weight data for that texel and a pair of colour endpoints from the colour endpoint data.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: January 31, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Kenneth Rovers, Yoong Chert Foo
  • Patent number: 11544892
    Abstract: A decoder unit is configured to decode a plurality of texels in accordance with a texel request, the plurality of texels being encoded across one or more blocks of encoded texture data each encoding a block of texels, and includes a first set of one or more decoders, each of the first set of decoders being configured to decode n texels from a single received block of encoded texture data; a second set of one or more decoders, each of the second set of decoders being configured to decode p texels from a single received block of encoded texture data, where p<n; and control logic configured to allocate blocks of encoded texture data to the decoders in accordance with the texel request.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: January 3, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Yoong Chert Foo, Kenneth Rovers
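    Illustrative sketch (editorial addition, not from the filing): one possible form of the control logic the abstract describes, allocating blocks of encoded texture data either to wide decoders (n texels per block) or narrow decoders (p texels per block, p < n) according to how many texels each block must supply. The largest-request-first policy is an assumption, not a claim of the patent.
```python
def allocate_blocks(requests, wide_decoders, narrow_decoders, n, p):
    """requests: (block_id, texels_needed) pairs; returns (block, decoder, width)."""
    assignments, wide, narrow = [], list(wide_decoders), list(narrow_decoders)
    # Serve the largest requests first so the wide decoders are not wasted.
    for block_id, needed in sorted(requests, key=lambda r: -r[1]):
        if needed > p and wide:
            assignments.append((block_id, wide.pop(0), n))
        elif narrow:
            assignments.append((block_id, narrow.pop(0), p))
        elif wide:
            assignments.append((block_id, wide.pop(0), n))
        else:
            assignments.append((block_id, None, 0))   # no decoder free this cycle
    return assignments

requests = [("blk0", 16), ("blk1", 2), ("blk2", 4), ("blk3", 1)]
for row in allocate_blocks(requests, ["D0"], ["d0", "d1", "d2"], n=16, p=4):
    print(row)
```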
  • Patent number: 11531545
    Abstract: A method of activating scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state and checking, by an instruction controller, if a swap flag is set in the decoded instruction. If the swap flag in the decoded instruction is set, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: December 20, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
  • Patent number: 11521343
    Abstract: Disclosed techniques relate to memory space management for graphics processing. In some embodiments, first and second graphics cores are configured to execute instructions for multiple threadgroups. In some embodiments, the threadgroups include a first threadgroup with multiple single-instruction multiple-data (SIMD) groups configured to execute a first shader program and a second threadgroup with multiple SIMD groups configured to execute a second, different shader program. Control circuitry may be configured to provide access to data stored in memory circuitry according to a shader memory space. The shader memory space may be accessible to threadgroups executed by the first graphics shader core, including the first and second threadgroups, but is not accessible to threadgroups executed by the second graphics shader core. Disclosed techniques may reduce latency, increase bandwidth available to the shader, reduce coherency cost, or any combination thereof.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: December 6, 2022
    Assignee: Apple Inc.
    Inventors: Terence M. Potter, Yoong Chert Foo, Ali Rabbani Rankouhi, Justin A. Hensley, Jonathan M. Redshaw
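    Illustrative sketch (editorial addition; not Apple's implementation): a toy Python model of the access rule the abstract describes, in which a shader memory space is visible to threadgroups executing on the graphics shader core that owns it and invisible to threadgroups on any other core. All names are assumptions.
```python
class ShaderMemorySpace:
    """Per-core shader memory space with a core-ownership access check."""

    def __init__(self, owning_core):
        self.owning_core = owning_core
        self.data = {}

    def access(self, core_id, key, value=None):
        if core_id != self.owning_core:
            raise PermissionError(f"core {core_id} cannot access this memory space")
        if value is not None:
            self.data[key] = value        # store on behalf of a threadgroup
        return self.data.get(key)

space = ShaderMemorySpace(owning_core=0)
space.access(core_id=0, key="scratch", value=[1, 2, 3])   # threadgroup on core 0
print(space.access(core_id=0, key="scratch"))             # [1, 2, 3]
try:
    space.access(core_id=1, key="scratch")                # threadgroup on core 1
except PermissionError as err:
    print(err)
```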
  • Patent number: 11500677
    Abstract: A method of synchronizing a group of scheduled tasks within a parallel processing unit into a known state is described. The method uses a synchronization instruction in a scheduled task which triggers, in response to decoding of the instruction, an instruction decoder to place the scheduled task into a non-active state and forward the decoded synchronization instruction to an atomic ALU for execution. When the atomic ALU executes the decoded synchronization instruction, the atomic ALU performs an operation and check on data assigned to the group ID of the scheduled task and if the check is passed, all scheduled tasks having the particular group ID are removed from the non-active state.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: November 15, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Ollie Mower, Yoong-Chert Foo
  • Publication number: 20220327778
    Abstract: A method and system for generating two or three dimensional computer graphics images using multisample antialiasing (MSAA) is provided, which enables memory bandwidth to be conserved. For each of one or more pixels it is determined whether all of a plurality of sample areas of that pixel are located within a particular primitive. For those pixels where it is determined that all the sample areas of that pixel are located within that primitive, a value is stored in a multisample memory for a smaller number of the sample areas of that pixel than the total number of the sample areas of that pixel and data is stored indicating that all the sample areas of that pixel are located within that primitive.
    Type: Application
    Filed: June 28, 2022
    Publication date: October 13, 2022
    Inventors: Yoong Chert Foo, Salil Sahasrabudhe, Andrew Davy
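    Illustrative sketch (editorial addition, not from the filing): a minimal Python model of the storage saving the abstract describes; a pixel whose sample areas all lie within the primitive stores a single value plus a flag, while a partially covered pixel stores one value per covered sample. The dictionary layout is an assumption made for illustration.
```python
SAMPLES_PER_PIXEL = 4

def store_pixel(multisample_memory, pixel, sample_coverage, colour):
    """sample_coverage: per-sample booleans for this pixel within the primitive."""
    if all(sample_coverage):
        # Fully covered: write one value plus a flag instead of one per sample.
        multisample_memory[pixel] = {"all_covered": True, "samples": [colour]}
    else:
        # Partially covered: write a value for each covered sample area.
        samples = [colour if covered else None for covered in sample_coverage]
        multisample_memory[pixel] = {"all_covered": False, "samples": samples}

memory = {}
store_pixel(memory, (0, 0), [True, True, True, True], (255, 0, 0))
store_pixel(memory, (1, 0), [True, False, False, True], (255, 0, 0))
for pixel, entry in memory.items():
    print(pixel, entry)
```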
  • Publication number: 20220276895
    Abstract: A method of scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state, and checking, by an instruction controller, if an ALU targeted by the decoded instruction is a primary instruction pipeline. If the targeted ALU is a primary instruction pipeline, a list associated with the primary instruction pipeline is checked to determine whether the scheduled task is already included in the list. If the scheduled task is already included in the list, the decoded instruction is sent to the primary instruction pipeline.
    Type: Application
    Filed: May 17, 2022
    Publication date: September 1, 2022
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
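    Illustrative sketch (editorial addition, not from the filing): a small Python model of the check the abstract describes; an instruction whose target ALU is a primary instruction pipeline is forwarded immediately when its task is already on that pipeline's list. The abstract does not say what happens when the task is not yet listed, so the "add it to the list" branch here is an assumption.
```python
class PrimaryPipeline:
    def __init__(self, name):
        self.name = name
        self.task_list = []   # scheduled tasks currently associated with this pipeline
        self.issued = []      # decoded instructions sent to the pipeline

    def receive(self, task_id, instruction):
        self.issued.append((task_id, instruction))

def dispatch(pipeline, task_id, instruction):
    if task_id in pipeline.task_list:
        # Task already on the pipeline's list: send the decoded instruction.
        pipeline.receive(task_id, instruction)
    else:
        # Assumed behaviour: record the task so later instructions can be sent.
        pipeline.task_list.append(task_id)

main_alu = PrimaryPipeline("main")
dispatch(main_alu, task_id=5, instruction="FMUL")   # first sighting: listed only
dispatch(main_alu, task_id=5, instruction="FADD")   # already listed: issued
print(main_alu.task_list, main_alu.issued)
```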
  • Patent number: 11393165
    Abstract: A method and system for generating two or three dimensional computer graphics images using multisample antialiasing (MSAA) is provided, which enables memory bandwidth to be conserved. For each of one or more pixels it is determined whether all of a plurality of sample areas of that pixel are located within a particular primitive. For those pixels where it is determined that all the sample areas of that pixel are located within that primitive, a value is stored in a multisample memory for a smaller number of the sample areas of that pixel than the total number of the sample areas of that pixel and data is stored indicating that all the sample areas of that pixel are located within that primitive.
    Type: Grant
    Filed: March 28, 2020
    Date of Patent: July 19, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Yoong Chert Foo, Salil Sahasrabudhe, Andrew Davy
  • Patent number: 11366691
    Abstract: A method of scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state, and checking, by an instruction controller, if an ALU targeted by the decoded instruction is a primary instruction pipeline. If the targeted ALU is a primary instruction pipeline, a list associated with the primary instruction pipeline is checked to determine whether the scheduled task is already included in the list. If the scheduled task is already included in the list, the decoded instruction is sent to the primary instruction pipeline.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: June 21, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
  • Publication number: 20220148249
    Abstract: Disclosed techniques relate to memory space management for graphics processing. In some embodiments, first and second graphics cores are configured to execute instructions for multiple threadgroups. In some embodiments, the threadgroups include a first threadgroup with multiple single-instruction multiple-data (SIMD) groups configured to execute a first shader program and a second threadgroup with multiple SIMD groups configured to execute a second, different shader program. Control circuitry may be configured to provide access to data stored in memory circuitry according to a shader memory space. The shader memory space may be accessible to threadgroups executed by the first graphics shader core, including the first and second threadgroups, but is not accessible to threadgroups executed by the second graphics shader core. Disclosed techniques may reduce latency, increase bandwidth available to the shader, reduce coherency cost, or any combination thereof.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 12, 2022
    Inventors: Terence M. Potter, Yoong Chert Foo, Ali Rabbani Rankouhi, Justin A. Hensley, Jonathan M. Redshaw
  • Publication number: 20220091885
    Abstract: A method of scheduling tasks within a GPU or other highly parallel processing unit is described which is both age-aware and wakeup event driven. Tasks which are received are added to an age-based task queue. Wakeup event bits for task types, or combinations of task types and data groups, are set in response to completion of a task dependency and these wakeup event bits are used to select an oldest task from the queue that satisfies predefined criteria.
    Type: Application
    Filed: December 1, 2021
    Publication date: March 24, 2022
    Inventors: Simon Nield, Adam de Grasse, Luca Iuliano, Ollie Mower, Yoong-Chert Foo
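    Illustrative sketch (editorial addition, not from the filing): a minimal Python model of the age-aware, wakeup-event-driven selection the abstract describes; tasks join an age-ordered queue, a completed dependency sets a wakeup event bit for a task type, and the scheduler picks the oldest queued task whose wakeup bit is set. Keying the wakeup bits by task type alone (rather than type plus data group) is a simplification.
```python
from collections import deque

class AgeAwareScheduler:
    def __init__(self):
        self.queue = deque()        # oldest task at the left
        self.wakeup_bits = set()    # task types whose dependencies have completed

    def add_task(self, task_id, task_type):
        self.queue.append((task_id, task_type))

    def dependency_complete(self, task_type):
        self.wakeup_bits.add(task_type)     # set the wakeup event bit

    def select(self):
        for i, (task_id, task_type) in enumerate(self.queue):
            if task_type in self.wakeup_bits:   # oldest task meeting the criteria
                del self.queue[i]
                return task_id
        return None

scheduler = AgeAwareScheduler()
scheduler.add_task(1, "vertex")
scheduler.add_task(2, "pixel")
print(scheduler.select())               # None: no wakeup event bits set yet
scheduler.dependency_complete("pixel")
print(scheduler.select())               # 2: oldest task whose bit is set
```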
  • Publication number: 20220075652
    Abstract: A method of activating scheduling instructions within a parallel processing unit includes checking if an ALU targeted by a decoded instruction is full by checking a value of an ALU work fullness counter stored in the instruction controller and associated with the targeted ALU. If the targeted ALU is not full, the decoded instruction is sent to the targeted ALU for execution and the ALU work fullness counter associated with the targeted ALU is updated. If, however, the targeted ALU is full, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state. When an ALU changes from being full to not being full, the scheduler is triggered to re-activate an oldest scheduled task waiting for the ALU by removing the oldest scheduled task from the non-active state.
    Type: Application
    Filed: November 17, 2021
    Publication date: March 10, 2022
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
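    Illustrative sketch (editorial addition, not from the filing): a small Python model of the fullness-counter flow the abstract describes; a non-full ALU accepts the decoded instruction and its work fullness counter is updated, a full ALU causes the task to be de-activated, and the oldest waiting task is re-activated when the ALU stops being full. The capacity value and data structures are assumptions.
```python
from collections import deque

class ALU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.fullness = 0            # ALU work fullness counter
        self.waiting = deque()       # tasks de-activated while the ALU was full

    def issue(self, task_id, instruction):
        if self.fullness < self.capacity:
            self.fullness += 1       # not full: accept and update the counter
            return f"task {task_id}: {instruction} issued"
        self.waiting.append(task_id) # full: de-activate the scheduled task
        return f"task {task_id}: de-activated"

    def retire(self):
        was_full = self.fullness == self.capacity
        self.fullness -= 1
        if was_full and self.waiting:        # full -> not full: wake oldest waiter
            return f"task {self.waiting.popleft()}: re-activated"
        return None

alu = ALU(capacity=2)
print(alu.issue(0, "FMA"))   # issued
print(alu.issue(1, "FMA"))   # issued
print(alu.issue(2, "FMA"))   # de-activated: ALU is full
print(alu.retire())          # task 2: re-activated
```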
  • Publication number: 20220066781
    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards identified at compile time. For each identified inter-pipeline data hazard the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. When a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted to indicate that the hazard related to the primary instruction has been resolved.
    Type: Application
    Filed: November 10, 2021
    Publication date: March 3, 2022
    Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower
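    Illustrative sketch (editorial addition, not from the filing): a simplified Python model of the counter mechanism the abstract describes; each hazard identified at compile time is tracked by a counter that is adjusted when its primary instruction leaves the instruction decoder and adjusted back when that instruction is resolved by a pipeline. Holding secondary instructions until the counter returns to zero is an assumption about how the tracked value would be consumed.
```python
from collections import defaultdict

class HazardCounters:
    def __init__(self):
        self.counters = defaultdict(int)   # counter id -> outstanding hazard count

    def primary_issued(self, counter_id):
        self.counters[counter_id] += 1     # primary output by the decoder: hazard live

    def primary_resolved(self, counter_id):
        self.counters[counter_id] -= 1     # primary resolved by a pipeline

    def secondary_may_issue(self, counter_id):
        # Assumption: a secondary instruction waits for its counter to reach zero.
        return self.counters[counter_id] == 0

hazards = HazardCounters()
hazards.primary_issued(counter_id=0)       # primary instruction leaves the decoder
print(hazards.secondary_may_issue(0))      # False: hazard still outstanding
hazards.primary_resolved(counter_id=0)     # primary completes in a pipeline
print(hazards.secondary_may_issue(0))      # True
```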
  • Publication number: 20220051363
    Abstract: A SIMD processing unit processes a plurality of tasks which each include up to a predetermined maximum number of work items. The work items of a task are arranged for executing a common sequence of instructions on respective data items. The data items are arranged into blocks, with some of the blocks including at least one invalid data item. Work items which relate to invalid data items are invalid work items. The SIMD processing unit comprises a group of processing lanes configured to execute instructions of work items of a particular task over a plurality of processing cycles. A control module assembles work items into the tasks based on the validity of the work items, so that invalid work items of the particular task are temporally aligned across the processing lanes. In this way the number of wasted processing slots due to invalid work items may be reduced.
    Type: Application
    Filed: October 29, 2021
    Publication date: February 17, 2022
    Inventors: John Howson, Jonathan Redshaw, Yoong Chert Foo
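    Illustrative sketch (editorial addition, not from the filing): a toy Python illustration of the effect the abstract describes; when invalid work items are temporally aligned across the processing lanes, whole cycles become empty and can be skipped rather than wasting scattered processing slots. The valid-items-first packing rule is a simplification of the control module's behaviour.
```python
def align_invalid(lanes):
    """lanes: per-lane lists of (work_item, is_valid); place valid items first."""
    return [sorted(lane, key=lambda item: not item[1]) for lane in lanes]

def count_active_cycles(lanes):
    """Count cycles in which at least one lane processes a valid work item."""
    return sum(1 for cycle in zip(*lanes) if any(valid for _, valid in cycle))

# 4 lanes x 4 cycles; False marks a work item that relates to an invalid data item.
lanes = [
    [("a0", True),  ("a1", False), ("a2", True),  ("a3", False)],
    [("b0", False), ("b1", True),  ("b2", True),  ("b3", False)],
    [("c0", True),  ("c1", False), ("c2", False), ("c3", True)],
    [("d0", True),  ("d1", True),  ("d2", False), ("d3", False)],
]
print("cycles before alignment:", count_active_cycles(lanes))                 # 4
print("cycles after alignment:",  count_active_cycles(align_invalid(lanes)))  # 2
```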
  • Patent number: 11204800
    Abstract: A method of scheduling tasks within a GPU or other highly parallel processing unit is described which is both age-aware and wakeup event driven. Tasks which are received are added to an age-based task queue. Wakeup event bits for task types, or combinations of task types and data groups, are set in response to completion of a task dependency and these wakeup event bits are used to select an oldest task from the queue that satisfies predefined criteria.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: December 21, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Simon Nield, Adam de Grasse, Luca Iuliano, Ollie Mower, Yoong-Chert Foo
  • Patent number: 11200064
    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards wherein inter-pipeline data hazards are identified at compile time. For each identified inter-pipeline data hazard the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. Then when a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted (e.g. incremented) to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted (e.g. decremented) to indicate that the hazard related to the primary instruction has been resolved.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: December 14, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower