Patents by Inventor Alan Vines
Alan Vines has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240137044
Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions each comprising b bits, each body portion corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
Type: Application
Filed: December 31, 2023
Publication date: April 25, 2024
Inventors: Simon Fenney, Greg Clark, Alan Vines
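The scheme described in this abstract can be illustrated with a minimal software sketch. It assumes groups of four unsigned items, a 4-bit header holding b, and MSB-first bit interleaving; the patent's actual header format, group size, and interleaving order may differ.

```python
def encode_group(items, h=4):
    """Encode one group: an h-bit header gives b, then body bits interleaved across items."""
    b = max(item.bit_length() for item in items)  # smallest b that fits every item
    header = format(b, f"0{h}b")                  # h-bit header storing b
    if b == 0:
        return header                             # b == 0: no body portions at all
    # Interleave: bit 0 of every item, then bit 1 of every item, and so on.
    body = "".join(
        format(item, f"0{b}b")[i] for i in range(b) for item in items
    )
    return header + body

def decode_group(bits, n_items, h=4):
    """Decode one group produced by encode_group."""
    b = int(bits[:h], 2)
    if b == 0:
        return [0] * n_items
    body = bits[h:h + b * n_items]
    parts = ["" for _ in range(n_items)]
    for i in range(b):                            # undo the interleaving
        for j in range(n_items):
            parts[j] += body[i * n_items + j]
    return [int(s, 2) for s in parts]
```

For example, `encode_group([5, 2, 7, 1])` picks b=3 (the widest item, 7, needs three bits), so a group of all-zero items costs only the header, which is where the compression comes from.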
-
Publication number: 20240028256
Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block configured to read the data from the memory and write the data to the internal buffer; and a memory writing block configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the two blocks are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory, and may traverse the multidimensional array using a plurality of nested loops.
Type: Application
Filed: October 3, 2023
Publication date: January 25, 2024
Inventors: Alan Vines, Stephen Spain, Fernando Escobar
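The nested-loop traversal this abstract mentions can be sketched in software: one loop per dimension walks the array, and each element is written to a transformed location. The transpose used here is an illustrative stand-in for the patent's transformations.

```python
import itertools

def traverse_and_transpose(src, shape):
    """Copy a flat row-major array of shape (d0, d1) into its transpose."""
    d0, d1 = shape
    dst = [0] * (d0 * d1)
    # A plurality of nested loops, one per dimension of the array.
    for i, j in itertools.product(range(d0), range(d1)):
        dst[j * d0 + i] = src[i * d1 + j]   # read row-major, write transposed
    return dst
```

In the hardware unit, the reading and writing blocks would perform these two address calculations independently, with the buffer and control channel keeping them in step.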
-
Patent number: 11863208
Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions each comprising b bits, each body portion corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
Type: Grant
Filed: March 10, 2021
Date of Patent: January 2, 2024
Assignee: Imagination Technologies Limited
Inventors: Simon Fenney, Greg Clark, Alan Vines
-
Patent number: 11775206
Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block configured to read the data from the memory and write the data to the internal buffer; and a memory writing block configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the two blocks are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory, and may traverse the multidimensional array using a plurality of nested loops.
Type: Grant
Filed: June 2, 2021
Date of Patent: October 3, 2023
Assignee: Imagination Technologies Limited
Inventors: Alan Vines, Stephen Spain, Fernando Escobar
-
Publication number: 20230259743
Abstract: A neural network accelerator includes a plurality of hardware processing units, each comprising hardware to accelerate performing one or more neural network operations on data, and a crossbar coupled to each of the hardware processing units and configured to selectively form, from a plurality of selectable pipelines, a pipeline from one or more of the hardware processing units to process input data to the neural network accelerator. The plurality of hardware processing units comprises (i) a convolution processing unit configured to accelerate performing convolution operations on data, and (ii) a configurable pooling processing unit configured to selectively perform an operation of a plurality of selectable operations on data, the plurality of selectable operations comprising a depth-wise convolution operation and one or more pooling operations.
Type: Application
Filed: December 30, 2022
Publication date: August 17, 2023
Inventors: Javier Sanchez, David Hough, Alan Vines
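The crossbar idea can be modelled in software: given a set of named processing units, select an ordered subset to form a pipeline and run data through it. The unit names and the toy operations below are hypothetical stand-ins for the hardware units in the abstract.

```python
def make_pipeline(units, order):
    """Form a pipeline from a chosen sequence of unit names (the crossbar's role)."""
    stages = [units[name] for name in order]
    def run(x):
        for stage in stages:   # data flows through the selected units in order
            x = stage(x)
        return x
    return run

# Illustrative stand-in operations, not the actual hardware behaviour.
units = {
    "convolution": lambda xs: [2 * v for v in xs],      # toy "convolution"
    "pooling":     lambda xs: [max(xs)],                # toy max-pool
    "activation":  lambda xs: [max(0, v) for v in xs],  # ReLU stand-in
}

pipe = make_pipeline(units, ["convolution", "activation", "pooling"])
```

A different `order` list yields a different pipeline from the same units, which is the flexibility the crossbar provides over a fixed-function pipeline.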
-
Publication number: 20230177320
Abstract: A neural network accelerator that has a configurable hardware pipeline includes a plurality of hardware processing units, each comprising hardware to accelerate performing one or more neural network operations on a tensor of data, and a crossbar coupled to each of the hardware processing units, the crossbar configured to selectively form, from a plurality of selectable pipelines, a pipeline from one or more of the hardware processing units to process input data to the neural network accelerator. At least one of the hardware processing units is configurable to transmit or receive a tensor via the crossbar in a selected processing order of a plurality of selectable processing orders, and the selected processing order is based on the pipeline formed by the crossbar.
Type: Application
Filed: September 30, 2022
Publication date: June 8, 2023
Inventors: Javier Sanchez, Alan Vines
-
Publication number: 20230177318
Abstract: Methods and devices for configuring a neural network accelerator that comprises a configurable hardware pipeline. The neural network accelerator includes a plurality of hardware processing units and a crossbar coupled to each hardware processing unit of the plurality of hardware processing units. Each hardware processing unit comprises hardware to accelerate performing one or more neural network operations on received data.
Type: Application
Filed: September 30, 2022
Publication date: June 8, 2023
Inventors: Javier Sanchez, Alan Vines
-
Publication number: 20230177321
Abstract: A neural network accelerator that has a configurable hardware pipeline includes a plurality of hardware processing units and a crossbar configured to selectively form, from a plurality of selectable pipelines, a pipeline from one or more of the hardware processing units to process input data to the neural network accelerator. Each hardware processing unit comprises hardware to accelerate performing one or more neural network operations on data, and the plurality of hardware processing units comprises a convolution processing unit configured to accelerate performing convolution operations on data.
Type: Application
Filed: September 30, 2022
Publication date: June 8, 2023
Inventors: Javier Sanchez, Alan Vines
-
Publication number: 20220100466
Abstract: A hardware downscaler and an architecture for implementing a FIR filter in which the downscaler can be arranged for downscaling by a half in one dimension. The downscaler can comprise: hardware logic implementing a first three-tap FIR filter; and hardware logic implementing a second three-tap FIR filter; wherein the output from the hardware logic implementing the first three-tap filter is provided as an input to the hardware logic implementing the second three-tap filter.
Type: Application
Filed: September 20, 2021
Publication date: March 31, 2022
Inventors: Timothy Lee, Alan Vines, David Hough
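A behavioural sketch of the cascade: two three-tap FIR filters in series act like one five-tap filter, and decimating the result by two downscales by a half. The `[1, 2, 1]/4` coefficients and the clamped edge handling are assumed for illustration, not taken from the patent.

```python
def three_tap(samples, coeffs=(1, 2, 1)):
    """Apply a three-tap FIR filter (edges clamped), normalised by sum(coeffs)."""
    n, total = len(samples), sum(coeffs)
    out = []
    for i in range(n):
        # Taps at i-1, i, i+1, with indices clamped to the array bounds.
        taps = [samples[max(0, min(n - 1, i + k - 1))] for k in range(3)]
        out.append(sum(c * t for c, t in zip(coeffs, taps)) / total)
    return out

def downscale_by_half(samples):
    """The first filter's output feeds the second; then keep every other sample."""
    return three_tap(three_tap(samples))[::2]
```

Cascading two cheap three-tap stages this way gives the smoothing of a wider filter while each hardware stage stays small, which is presumably the motivation for the arrangement.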
-
Publication number: 20220092731
Abstract: A hardware downscaling module and downscaling methods for downscaling a two-dimensional array of values. The hardware downscaling module comprises a first group of one-dimensional downscalers and a second group of one-dimensional downscalers, wherein the first group is arranged to receive a two-dimensional array of values and to perform downscaling in series in a first dimension, and the second group is arranged to receive an output from the first group and to perform downscaling in series in a second dimension.
Type: Application
Filed: September 20, 2021
Publication date: March 24, 2022
Inventors: Timothy Lee, Alan Vines, David Hough
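The separable arrangement this abstract describes can be sketched as follows: one group of one-dimensional downscalers works along the rows, and a second group works along the columns of that result. Averaging adjacent pairs is an assumed stand-in for the actual one-dimensional downscaler.

```python
def downscale_1d(row):
    """Average adjacent pairs: a stand-in one-dimensional downscaler."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]

def downscale_2d(array):
    """Downscale the first dimension, then the second, as two groups of 1-D downscalers."""
    rows = [downscale_1d(row) for row in array]        # first group: along rows
    cols = [downscale_1d(col) for col in zip(*rows)]   # second group: along columns
    return [list(r) for r in zip(*cols)]               # back to row-major layout
```

Because the filtering is separable, each group only ever handles one-dimensional streams, which maps naturally onto chains of identical hardware downscalers.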
-
Publication number: 20210373801
Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block configured to read the data from the memory and write the data to the internal buffer; and a memory writing block configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the two blocks are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory, and may traverse the multidimensional array using a plurality of nested loops.
Type: Application
Filed: June 2, 2021
Publication date: December 2, 2021
Inventors: Alan Vines, Stephen Spain, Fernando Escobar
-
Publication number: 20210194500
Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions each comprising b bits, each body portion corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
Type: Application
Filed: March 10, 2021
Publication date: June 24, 2021
Inventors: Simon Fenney, Greg Clark, Alan Vines
-
Patent number: 10972126
Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions each comprising b bits, each body portion corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
Type: Grant
Filed: November 27, 2019
Date of Patent: April 6, 2021
Assignee: Imagination Technologies Limited
Inventors: Simon Fenney, Greg Clark, Alan Vines
-
Publication number: 20200177200
Abstract: A data compression method comprises encoding groups of data items by generating, for each group, header data comprising h bits and a plurality of body portions each comprising b bits, each body portion corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field, can be written to memory.
Type: Application
Filed: November 27, 2019
Publication date: June 4, 2020
Inventors: Simon Fenney, Greg Clark, Alan Vines