Patents by Inventor Adrian J. Anderson

Adrian J. Anderson has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11755474
    Abstract: Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: September 12, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Paul Murrin, Adrian J. Anderson, Mohammed El-Hajjar
  • Patent number: 11372546
    Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: June 28, 2022
    Assignee: Nordic Semiconductor ASA
    Inventors: Adrian J. Anderson, Gary C. Wass, Gareth J. Davies
  • Publication number: 20220075723
    Abstract: Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses.
    Type: Application
    Filed: November 18, 2021
    Publication date: March 10, 2022
    Inventors: Paul Murrin, Adrian J. Anderson, Mohammed El-Hajjar
  • Patent number: 11210217
    Abstract: Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: December 28, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Paul Murrin, Adrian J. Anderson, Mohammed El-Hajjar
  • Publication number: 20200242029
    Abstract: Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses.
    Type: Application
    Filed: April 10, 2020
    Publication date: July 30, 2020
    Inventors: Paul Murrin, Adrian J. Anderson, Mohammed El-Hajjar
  • Patent number: 10657050
    Abstract: Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: May 19, 2020
    Assignee: Imagination Technologies Limited
    Inventors: Paul Murrin, Adrian J. Anderson, Mohammed El-Hajjar
  • Patent number: 10387155
    Abstract: A processing system includes a program processor for executing a program, and a dedicated processor for executing operations of a particular type (e.g. vector processing operations). The program processor uses an interfacing module and a group of two or more register banks to offload operations of the particular type to the dedicated processor for execution thereon. While the dedicated processor is accessing one register bank for executing a current operation, the interfacing module can concurrently load data for a subsequent operation into a different one of the register banks. The use of multiple register banks allows the dedicated processor to spend a greater proportion of its time executing operations.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: August 20, 2019
    Assignee: Imagination Technologies Limited
    Inventors: Paul Murrin, Gareth Davies, Adrian J. Anderson
  • Publication number: 20190236006
    Abstract: Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses.
    Type: Application
    Filed: April 11, 2019
    Publication date: August 1, 2019
    Inventors: Paul Murrin, Adrian J. Anderson, Mohammed El-Hajjar
  • Publication number: 20190220199
    Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
    Type: Application
    Filed: March 25, 2019
    Publication date: July 18, 2019
    Inventors: Adrian J. Anderson, Gary C. Wass, Gareth J. Davies
  • Patent number: 10296456
    Abstract: Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: May 21, 2019
    Assignee: Imagination Technologies Limited
    Inventors: Paul Murrin, Adrian J. Anderson, Mohammed El-Hajjar
  • Patent number: 10268377
    Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: April 23, 2019
    Assignee: Imagination Technologies Limited
    Inventors: Adrian J. Anderson, Gary C. Wass, Gareth J. Davies
  • Patent number: 9819528
    Abstract: Methods and apparatus for efficient demapping of constellations are described. In an embodiment, these methods may be implemented within a digital communications receiver, such as a Digital Terrestrial Television receiver. The method reduces the number of distance metric calculations which are required to calculate soft information in the demapper by locating the closest constellation point to the received symbol. This closest constellation point is identified based on a comparison of distance metrics which are calculated parallel to either the I- or Q-axis. The number of distance metric calculations may be reduced still further by identifying a local minimum constellation point for each bit in the received symbol and these constellation points are identified using a similar method to the closest constellation point. Where the system uses rotated constellations, the received symbol may be unrotated before any constellation points are identified.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: November 14, 2017
    Assignee: Imagination Technologies Limited
    Inventors: Mohammed El-Hajjar, Paul Murrin, Adrian J. Anderson
  • Patent number: 9684592
    Abstract: Memory address generation for digital signal processing is described. In one example, a digital signal processing system-on-chip utilizes an on-chip memory space that is shared between functional blocks of the system. An on-chip DMA controller comprises an address generator that can generate sequences of read and write memory addresses for data items being transferred between the on-chip memory and a paged memory device, or internally within the system. The address generator is configurable and can generate non-linear sequences for the read and/or write addresses. This enables aspects of interleaving/deinterleaving operations to be performed as part of a data transfer between internal or paged memory. As a result, a dedicated memory for interleaving operations is not required. In further examples, the address generator can be configured to generate read and/or write addresses that take into account limitations of particular memory devices when performing interleaving, such as DRAM.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: June 20, 2017
    Assignee: Imagination Technologies Limited
    Inventor: Adrian J. Anderson
  • Publication number: 20170160947
    Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
    Type: Application
    Filed: February 15, 2017
    Publication date: June 8, 2017
    Inventors: Adrian J. Anderson, Gary C. Wass, Gareth J. Davies
  • Publication number: 20170068616
    Abstract: Memory address generation for digital signal processing is described. In one example, a digital signal processing system-on-chip utilises an on-chip memory space that is shared between functional blocks of the system. An on-chip DMA controller comprises an address generator that can generate sequences of read and write memory addresses for data items being transferred between the on-chip memory and a paged memory device, or internally within the system. The address generator is configurable and can generate non-linear sequences for the read and/or write addresses. This enables aspects of interleaving/deinterleaving operations to be performed as part of a data transfer between internal or paged memory. As a result, a dedicated memory for interleaving operations is not required. In further examples, the address generator can be configured to generate read and/or write addresses that take into account limitations of particular memory devices when performing interleaving, such as DRAM.
    Type: Application
    Filed: November 21, 2016
    Publication date: March 9, 2017
    Inventor: Adrian J. Anderson
  • Patent number: 9575900
    Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
    Type: Grant
    Filed: February 19, 2015
    Date of Patent: February 21, 2017
    Assignee: Imagination Technologies Limited
    Inventors: Adrian J. Anderson, Gary C. Wass, Gareth J. Davies
  • Patent number: 9529747
    Abstract: Memory address generation for digital signal processing is described. In one example, a digital signal processing system-on-chip utilizes an on-chip memory space that is shared between functional blocks of the system. An on-chip DMA controller comprises an address generator that can generate sequences of read and write memory addresses for data items being transferred between the on-chip memory and a paged memory device, or internally within the system. The address generator is configurable and can generate non-linear sequences for the read and/or write addresses. This enables aspects of interleaving/deinterleaving operations to be performed as part of a data transfer between internal or paged memory. As a result, a dedicated memory for interleaving operations is not required. In further examples, the address generator can be configured to generate read and/or write addresses that take into account limitations of particular memory devices when performing interleaving, such as DRAM.
    Type: Grant
    Filed: August 28, 2013
    Date of Patent: December 27, 2016
    Assignee: Imagination Technologies Limited
    Inventor: Adrian J. Anderson
  • Publication number: 20160294597
    Abstract: Methods and apparatus for efficient demapping of constellations are described. In an embodiment, these methods may be implemented within a digital communications receiver, such as a Digital Terrestrial Television receiver. The method reduces the number of distance metric calculations which are required to calculate soft information in the demapper by locating the closest constellation point to the received symbol. This closest constellation point is identified based on a comparison of distance metrics which are calculated parallel to either the I- or Q-axis. The number of distance metric calculations may be reduced still further by identifying a local minimum constellation point for each bit in the received symbol and these constellation points are identified using a similar method to the closest constellation point. Where the system uses rotated constellations, the received symbol may be unrotated before any constellation points are identified.
    Type: Application
    Filed: June 14, 2016
    Publication date: October 6, 2016
    Inventors: Mohammed El-Hajjar, Paul Murrin, Adrian J. Anderson
  • Publication number: 20160283439
    Abstract: A SIMD processing module is provided, comprising multiple vector processing units (“VUs”), which can be used to execute an instruction on respective parts (or “subvectors”) within a vector. A control unit determines a vector position indication for each of the VUs to indicate which part of the vector that VU is to execute the instruction on. Therefore, the vector is conceptually divided into subvectors with the respective VUs executing the instruction on the respective subvectors in parallel. Each VU can then execute the instruction as intended, but only on a subsection of the whole vector. This allows an instruction that is written for execution on an n-way VU to be executed by multiple n-way VUs, each starting at different points of the vector, such that the instruction can be executed on more than n of the data items of the vector in parallel.
    Type: Application
    Filed: March 25, 2016
    Publication date: September 29, 2016
    Inventors: Paul Murrin, Gareth Davies, Adrian J. Anderson
  • Publication number: 20160283235
    Abstract: A processing system includes a program processor for executing a program, and a dedicated processor for executing operations of a particular type (e.g. vector processing operations). The program processor uses an interfacing module and a group of two or more register banks to offload operations of the particular type to the dedicated processor for execution thereon. Whilst the dedicated processor is accessing one register bank for executing a current operation, the interfacing module can concurrently load data for a subsequent operation into a different one of the register banks. The use of multiple register banks allows the dedicated processor to spend a greater proportion of its time executing operations.
    Type: Application
    Filed: March 24, 2016
    Publication date: September 29, 2016
    Inventors: Paul Murrin, Gareth Davies, Adrian J. Anderson
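
The filings above cluster around a handful of recurring techniques. The short Python sketches that follow are illustrative models of those ideas rather than implementations of the patented methods; all block sizes, names and parameters in them are assumptions.

The tile-based de-interleaving filings (e.g. patent 11755474) split a row-column de-interleave into two memory-transfer stages so that the DRAM only ever sees linear, burst-friendly address runs, while the non-linear addressing is confined to the on-chip memory. A minimal model of that split, assuming a small block and tile size:

```python
# Illustrative model of two-stage, tile-based de-interleaving (sizes are assumptions).
R, C = 4, 6        # rows/columns of the row-column interleaver block
RT, CT = 2, 3      # tile dimensions; R % RT == 0 and C % CT == 0
BURST = 6          # DRAM burst length in data items

def tile_coords(d):
    """Map a linear DRAM address to the (row, column) of the logical R x C block,
    using an assumed 'row-major tiles, row-major within tile' DRAM layout."""
    tiles_per_row = C // CT
    tile, within = divmod(d, RT * CT)
    tile_r, tile_c = divmod(tile, tiles_per_row)
    rr, cc = divmod(within, CT)
    return tile_r * RT + rr, tile_c * CT + cc

# The received buffer is column-interleaved: index c*R + r holds original item r*C + c.
original = list(range(R * C))
onchip_in = [None] * (R * C)
for r in range(R):
    for c in range(C):
        onchip_in[c * R + r] = original[r * C + c]

# Stage 1: non-linear on-chip reads, linear DRAM writes (one tile's worth at a time).
dram = [None] * (R * C)
for d in range(R * C):
    r, c = tile_coords(d)
    dram[d] = onchip_in[c * R + r]        # non-linear read address c*R + r

# Stage 2: linear DRAM burst reads, non-linear on-chip writes.
onchip_out = [None] * (R * C)
for base in range(0, R * C, BURST):
    for d in range(base, base + BURST):   # efficient consecutive-address burst
        r, c = tile_coords(d)
        onchip_out[r * C + c] = dram[d]   # non-linear write address r*C + c

assert onchip_out == original             # block is fully de-interleaved
```

Each stage re-orders part of the block, yet every DRAM access in the model is a run of consecutive addresses, which is the property the abstracts highlight.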
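
The data-transfer filings (e.g. patent 11372546) describe a multi-threaded control processor in which each thread owns a memory-access channel, waits for an event, and then drives the memory access controller so a selected fixed-function accelerator can read from or write to the shared memory. A sketch of that event-per-channel pattern using Python threads and queues, with hypothetical class and accelerator names:

```python
import threading
import queue

class MemoryAccessController:
    """Toy stand-in for the hardware block that moves data for an accelerator."""
    def __init__(self):
        self.memory = {}
        self.lock = threading.Lock()

    def transfer(self, channel, accelerator, buffer_id):
        with self.lock:
            data = self.memory.get(buffer_id, [])
            result = accelerator(data)            # accelerator reads, processes...
            self.memory[buffer_id] = result       # ...and writes back
        print(f"channel {channel}: ran {accelerator.__name__} on buffer {buffer_id}")

def channel_thread(channel, events, controller, accelerator, stop):
    """One control thread per memory-access channel: block on an event, then
    drive the controller so the selected accelerator can access memory."""
    while not stop.is_set():
        try:
            buffer_id = events.get(timeout=0.1)   # wait for an event on this channel
        except queue.Empty:
            continue
        controller.transfer(channel, accelerator, buffer_id)
        events.task_done()

# Two toy "fixed-function accelerators".
def fir_filter(data):  return [x * 2 for x in data]
def fft_block(data):   return list(reversed(data))

controller = MemoryAccessController()
controller.memory = {0: [1, 2, 3], 1: [4, 5, 6]}
stop = threading.Event()
channels = {0: queue.Queue(), 1: queue.Queue()}
workers = [
    threading.Thread(target=channel_thread, args=(0, channels[0], controller, fir_filter, stop)),
    threading.Thread(target=channel_thread, args=(1, channels[1], controller, fft_block, stop)),
]
for w in workers: w.start()
channels[0].put(0)      # event: buffer 0 is ready for the FIR accelerator
channels[1].put(1)      # event: buffer 1 is ready for the FFT accelerator
for q in channels.values(): q.join()
stop.set()
for w in workers: w.join()
```

Because the accelerators are driven only through per-channel events, swapping or re-ordering them needs no rewiring, which is the configurability the abstracts emphasise.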
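
Patent 10387155 describes an interfacing module that fills one register bank with operands for the next offloaded operation while the dedicated processor executes the current operation out of another bank. A sequential sketch of that two-bank ping-pong (in hardware the load and execute overlap in time; here the model only shows which bank each step touches):

```python
# Minimal sketch of the two-register-bank ping-pong used when offloading
# vector operations to a dedicated processor (bank count and names assumed).
operations = [([1, 2, 3], [4, 5, 6]),
              ([7, 8, 9], [1, 1, 1]),
              ([2, 2, 2], [3, 3, 3])]

banks = [None, None]                  # two register banks
results = []

def load(bank_index, operands):
    banks[bank_index] = operands      # interfacing module fills a free bank

def execute(bank_index):
    a, b = banks[bank_index]          # dedicated processor reads its bank...
    results.append([x + y for x, y in zip(a, b)])   # ...and runs the vector op

load(0, operations[0])                # prime the first bank
for i, op in enumerate(operations):
    current = i % 2
    if i + 1 < len(operations):
        load((i + 1) % 2, operations[i + 1])   # load next operands into the idle bank
    execute(current)                           # while the current bank is in use

assert results == [[5, 7, 9], [8, 9, 10], [5, 5, 5]]
```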
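
Patent 9819528 reduces the number of distance-metric calculations needed for soft information by locating the closest constellation point through comparisons made parallel to the I- and Q-axes. The sketch below shows why that works for a Gray-coded square constellation: the I bits and Q bits separate, so the nearest point falls out of two 1-D searches; the full max-log LLR loop is included only as the conventional reference, not as the patented reduction. The 16-QAM mapping, noise variance and received symbol are assumptions:

```python
# Gray-coded 16-QAM: bits b0 b1 select the I level, b2 b3 select the Q level.
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
BITS = {v: k for k, v in LEVELS.items()}

def nearest_level(x):
    """1-D search parallel to one axis: nearest of the four amplitude levels."""
    return min(LEVELS.values(), key=lambda level: abs(x - level))

def demap_hard(i, q):
    """Hard-decide the closest constellation point with two 1-D searches
    instead of 16 two-dimensional distance calculations."""
    return BITS[nearest_level(i)] + BITS[nearest_level(q)]

def maxlog_llrs(i, q, noise_var=1.0):
    """Conventional max-log LLRs over all 16 points, shown for comparison."""
    points = [(ib + qb, LEVELS[ib], LEVELS[qb]) for ib in LEVELS for qb in LEVELS]
    llrs = []
    for bit in range(4):
        d0 = min((i - pi) ** 2 + (q - pq) ** 2 for b, pi, pq in points if b[bit] == 0)
        d1 = min((i - pi) ** 2 + (q - pq) ** 2 for b, pi, pq in points if b[bit] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

received = (0.4, -2.2)                          # noisy received symbol (illustrative)
print("hard bits:", demap_hard(*received))      # closest point via axis-wise search
print("max-log LLRs:", maxlog_llrs(*received))
```

For rotated constellations the abstracts note that the received symbol may first be unrotated, after which the same axis-wise search applies.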
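
Patents 9684592 and 9529747 describe a configurable DMA address generator that emits non-linear read and/or write address sequences, so interleaving or de-interleaving happens as a side effect of the transfer and no dedicated interleaver memory is needed. A small generator-based model of that idea, with assumed block dimensions:

```python
def address_generator(rows, cols, interleave=True):
    """Configurable address sequence: linear when interleave=False, otherwise a
    row-column (transpose-style) non-linear sequence, yielded one address at a time."""
    if not interleave:
        yield from range(rows * cols)
    else:
        for c in range(cols):
            for r in range(rows):
                yield r * cols + c

def dma_copy(src, dst, read_addrs, write_addrs):
    """Model of a DMA transfer driven by independent read/write address generators."""
    for ra, wa in zip(read_addrs, write_addrs):
        dst[wa] = src[ra]

rows, cols = 3, 4
src = list(range(rows * cols))          # 0..11 in natural (row-major) order
dst = [None] * (rows * cols)

# Non-linear reads plus linear writes interleave the block during the transfer
# itself, so no separate interleaver memory is required.
dma_copy(src, dst,
         read_addrs=address_generator(rows, cols, interleave=True),
         write_addrs=address_generator(rows, cols, interleave=False))

assert dst == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```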
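
Publication 20160283439 splits a vector across several n-way vector units, each handed a different vector-position indication so that the same instruction runs on a different subvector in parallel. A sketch with an assumed 4-way width and a simple add instruction:

```python
N_WAY = 4                       # each vector unit processes 4 lanes per instruction
NUM_VUS = 3                     # three VUs execute the same instruction in parallel

def vu_execute(vector_a, vector_b, start):
    """One n-way VU executing 'add' on its subvector, beginning at its assigned
    vector position (the control unit supplies 'start')."""
    end = start + N_WAY
    return [a + b for a, b in zip(vector_a[start:end], vector_b[start:end])]

a = list(range(12))             # a 12-element vector: 3 VUs x 4 lanes
b = [10] * 12

# The control unit hands each VU a different starting position into the vector.
starts = [vu * N_WAY for vu in range(NUM_VUS)]
partials = [vu_execute(a, b, s) for s in starts]   # conceptually concurrent
result = [x for part in partials for x in part]

assert result == [x + 10 for x in a]               # 12 lanes processed per issue
```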