Patents by Inventor Alan Vines

Alan Vines has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240137044
    Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions, each comprising b bits and each corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
    Type: Application
    Filed: December 31, 2023
    Publication date: April 25, 2024
    Inventors: Simon Fenney, Greg Clark, Alan Vines
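The header/body scheme described in the abstract above can be sketched in software. The following is a minimal illustration, not the patented implementation: it assumes b is chosen as the bit-width of the largest item in the group, that h is a fixed header width, and that interleaving takes bit i of every body portion before bit i+1.

```python
def encode_group(items, h_bits=6):
    """Encode one group of data items as header bits plus an
    interleaved body data field (illustrative sketch only)."""
    # b: minimum width that holds the largest item in the group.
    b = max(items).bit_length() if any(items) else 0
    header = format(b, f"0{h_bits}b")
    if b == 0:
        return header  # all-zero group: header only, no body portions
    # One b-bit body portion per data item.
    portions = [format(x, f"0{b}b") for x in items]
    # Interleave: bit position i of every portion, for i = 0..b-1.
    body = "".join(p[i] for i in range(b) for p in portions)
    return header + body
```

For the group [3, 1, 0, 2], b is 2, so the header encodes 2 and the body interleaves the high bits of all four portions followed by the low bits.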
  • Publication number: 20240028256
    Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block configured to read the data from the memory and write the data to the internal buffer; and a memory writing block configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory. The hardware unit may be configured to traverse the multidimensional data using a plurality of nested loops.
    Type: Application
    Filed: October 3, 2023
    Publication date: January 25, 2024
    Inventors: Alan Vines, Stephen Spain, Fernando Escobar
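A software model of the behaviour described above might look as follows. This is an illustrative sketch only (the real hardware unit's loop ordering, buffering policy, and supported transformations are not specified here): a flat "memory" is read in nested-loop order, each element is staged in an internal buffer, and it is written back with its dimensions permuted.

```python
import itertools

def transform(data, shape, perm):
    """Read a flat row-major array in nested-loop order, stage each
    element in an internal buffer, write it back permuted."""
    def strides_for(s):
        st = [1] * len(s)
        for i in range(len(s) - 2, -1, -1):
            st[i] = st[i + 1] * s[i + 1]
        return st

    in_strides = strides_for(shape)
    out_shape = [shape[p] for p in perm]
    out_strides = strides_for(out_shape)
    buffer = {}  # models the internal buffer between reader and writer
    out = [None] * len(data)
    # One nested loop per dimension, modelled with itertools.product.
    for idx in itertools.product(*(range(s) for s in shape)):
        buffer[idx] = data[sum(i * s for i, s in zip(idx, in_strides))]
        out_idx = tuple(idx[p] for p in perm)
        out[sum(i * s for i, s in zip(out_idx, out_strides))] = buffer.pop(idx)
    return out
```

Transposing a 2x3 array with perm=(1, 0) interleaves the two rows into three rows of two.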
  • Patent number: 11863208
    Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions, each comprising b bits and each corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: January 2, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Simon Fenney, Greg Clark, Alan Vines
  • Patent number: 11775206
    Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block configured to read the data from the memory and write the data to the internal buffer; and a memory writing block configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory. The hardware unit may be configured to traverse the multidimensional data using a plurality of nested loops.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: October 3, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Alan Vines, Stephen Spain, Fernando Escobar
  • Publication number: 20230259743
    Abstract: A neural network accelerator includes a plurality of hardware processing units, each hardware processing unit comprising hardware to accelerate performing one or more neural network operations on data; and a crossbar coupled to each hardware processing unit of the plurality of hardware processing units and configured to selectively form, from a plurality of selectable pipelines, a pipeline from one or more of the hardware processing units of the plurality of hardware processing units to process input data to the neural network accelerator. The plurality of hardware processing units comprises (i) a convolution processing unit configured to accelerate performing convolution operations on data, and (ii) a configurable pooling processing unit configured to selectively perform an operation of a plurality of selectable operations on data, the plurality of selectable operations comprising a depth-wise convolution operation and one or more pooling operations.
    Type: Application
    Filed: December 30, 2022
    Publication date: August 17, 2023
    Inventors: Javier Sanchez, David Hough, Alan Vines
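The crossbar idea above can be modelled as a configuration-time choice of which processing units data flows through, and in what order. The sketch below is a toy model: the unit names and their stand-in behaviours are invented for illustration and do not reflect the accelerator's actual unit set or operations.

```python
class NNA:
    """Toy model: a crossbar wires a chosen subset of processing
    units into a pipeline at configuration time."""

    def __init__(self):
        # Illustrative stand-ins, not the real processing units.
        self.units = {
            "conv": lambda x: [v * 2 for v in x],      # stand-in for convolution
            "pool": lambda x: [max(x)],                # stand-in for pooling
            "act":  lambda x: [max(0, v) for v in x],  # stand-in for activation
        }

    def run(self, pipeline, data):
        # The crossbar forms the selected pipeline; data flows
        # from one selected unit to the next in order.
        for name in pipeline:
            data = self.units[name](data)
        return data
```

The same hardware can thus realise different pipelines ("conv" then "act", or "conv" then "pool") without rewiring, which is the point of the selectable-pipeline crossbar.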
  • Publication number: 20230177320
    Abstract: A neural network accelerator that has a configurable hardware pipeline includes a plurality of hardware processing units, each hardware processing unit comprising hardware to accelerate performing one or more neural network operations on a tensor of data; and a crossbar coupled to each hardware processing unit of the plurality of hardware processing units, the crossbar configured to selectively form, from a plurality of selectable pipelines, a pipeline from one or more of the hardware processing units of the plurality of hardware processing units to process input data to the neural network accelerator. At least one of the hardware processing units is configurable to transmit or receive a tensor via the crossbar in a selected processing order of a plurality of selectable processing orders, and the selected processing order is based on the pipeline formed by the crossbar.
    Type: Application
    Filed: September 30, 2022
    Publication date: June 8, 2023
    Inventors: Javier Sanchez, Alan Vines
  • Publication number: 20230177318
    Abstract: Methods and devices for configuring a neural network accelerator that comprises a configurable hardware pipeline. The neural network accelerator includes a plurality of hardware processing units and a crossbar coupled to each hardware processing unit of the plurality of hardware processing units. Each hardware processing unit comprises hardware to accelerate performing one or more neural network operations on received data.
    Type: Application
    Filed: September 30, 2022
    Publication date: June 8, 2023
    Inventors: Javier Sanchez, Alan Vines
  • Publication number: 20230177321
    Abstract: A neural network accelerator that has a configurable hardware pipeline includes a plurality of hardware processing units and a crossbar configured to selectively form, from a plurality of selectable pipelines, a pipeline from one or more of the hardware processing units to process input data to the neural network accelerator. Each hardware processing unit comprises hardware to accelerate performing one or more neural network operations on data, and the plurality of hardware processing units comprises a convolution processing unit configured to accelerate performing convolution operations on data.
    Type: Application
    Filed: September 30, 2022
    Publication date: June 8, 2023
    Inventors: Javier Sanchez, Alan Vines
  • Publication number: 20220100466
    Abstract: A hardware downscaler and an architecture for implementing an FIR filter in which the downscaler can be arranged for downscaling by half in one dimension. The downscaler can comprise: hardware logic implementing a first three-tap FIR filter; and hardware logic implementing a second three-tap FIR filter; wherein the output from the hardware logic implementing the first three-tap filter is provided as an input to the hardware logic implementing the second three-tap filter.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 31, 2022
    Inventors: Timothy Lee, Alan Vines, David Hough
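The cascade structure above (one three-tap FIR feeding a second three-tap FIR, followed by decimation by two) can be sketched as follows. The tap values [0.25, 0.5, 0.25] and the replicate-edge border handling are assumptions for illustration; the patent does not fix them here.

```python
def three_tap(x, taps):
    """Three-tap FIR with edge replication (border handling is an
    assumption, not taken from the patent)."""
    padded = [x[0]] + x + [x[-1]]
    return [sum(t * padded[i + j] for j, t in enumerate(taps))
            for i in range(len(x))]

def downscale_half(x):
    # Two cascaded three-tap filters, then keep every other sample.
    taps = [0.25, 0.5, 0.25]  # illustrative coefficients
    y = three_tap(three_tap(x, taps), taps)
    return y[::2]
```

Cascading two three-tap filters is equivalent to a single five-tap filter, but each stage needs only three multipliers, which is the attraction for a hardware implementation.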
  • Publication number: 20220092731
    Abstract: A hardware downscaling module and downscaling methods for downscaling a two-dimensional array of values. The hardware downscaling module comprises a first group of one-dimensional downscalers; and a second group of one-dimensional downscalers; wherein the first group of one-dimensional downscalers is arranged to receive a two-dimensional array of values and to perform downscaling in series in a first dimension; and wherein the second group of one-dimensional downscalers is arranged to receive an output from the first group of one-dimensional downscalers and to perform downscaling in series in a second dimension.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 24, 2022
    Inventors: Timothy Lee, Alan Vines, David Hough
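The separable structure above, one group of 1D downscalers per dimension, can be sketched in software. This is an illustrative model with a simple box (averaging) downscaler and a single stage per dimension; the patent describes groups of downscalers operating in series, and the downscaler type is an assumption here.

```python
def downscale_1d(row, factor=2):
    """Average each window of `factor` samples (simple box
    downscaler; illustrative, not the patented filter)."""
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row) - factor + 1, factor)]

def downscale_2d(image):
    # First group: downscale along the first dimension (each row).
    rows = [downscale_1d(r) for r in image]
    # Second group: downscale along the second dimension (each
    # column of the intermediate result).
    cols = [downscale_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Processing the dimensions separately keeps each downscaler one-dimensional, which is what lets the module be built from two groups of identical 1D units.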
  • Publication number: 20210373801
    Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block configured to read the data from the memory and write the data to the internal buffer; and a memory writing block configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory. The hardware unit may be configured to traverse the multidimensional data using a plurality of nested loops.
    Type: Application
    Filed: June 2, 2021
    Publication date: December 2, 2021
    Inventors: Alan Vines, Stephen Spain, Fernando Escobar
  • Publication number: 20210194500
    Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions, each comprising b bits and each corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
    Type: Application
    Filed: March 10, 2021
    Publication date: June 24, 2021
    Inventors: Simon Fenney, Greg Clark, Alan Vines
  • Patent number: 10972126
    Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions, each comprising b bits and each corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: April 6, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Simon Fenney, Greg Clark, Alan Vines
  • Publication number: 20200177200
    Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions, each comprising b bits and each corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
    Type: Application
    Filed: November 27, 2019
    Publication date: June 4, 2020
    Inventors: Simon Fenney, Greg Clark, Alan Vines