METHODS AND SYSTEMS FOR PERFORMING A SPARSE SUBMANIFOLD CONVOLUTION ON A GPU
Methods of implementing a sparse submanifold convolution on a graphics processing unit. The methods include: receiving, at the graphics processing unit, an input tensor in a dense format; identifying, at the graphics processing unit, active positions of the input tensor; performing, at the graphics processing unit, an indexed unfold operation on the input tensor based on the identified active positions to generate an input matrix comprising elements of the input tensor in each active window of the input tensor; and performing, at the graphics processing unit, a matrix multiplication between a weight matrix and the input matrix to generate an output matrix that comprises elements of an output tensor of the sparse submanifold convolution based on the active windows. The methods may further comprise performing, at the graphics processing unit, an indexed fold operation on the output matrix based on the active windows to generate an output tensor in a dense format.
This application claims foreign priority under 35 U.S.C. 119 from United Kingdom patent application Nos. 2303116.4, 2303117.2, 2303118.0, 2303119.8, and 2303120.6, all filed on 2 Mar. 2023, the contents of which are incorporated by reference herein in their entirety.
TECHNICAL FIELD
This application is directed to methods and systems for performing a sparse submanifold convolution on a graphics processing unit (GPU).
BACKGROUND
As is known to those of skill in the art, a point cloud is a set of individual data points plotted in two-dimensional (2D) or three-dimensional (3D) space. For example, each point in a 3D point cloud may represent a measurement at a particular x, y and z location. A point cloud may be used to represent an object in space. Point clouds may be generated by a sensor, such as, but not limited to, a LiDAR scanner or a depth camera. As is known to those of skill in the art, a LiDAR scanner uses light in the form of a pulsed laser to measure distances. As point clouds do not typically have a point for each possible co-ordinate, point clouds are considered to be sparse datasets.
There are a wide range of real-world artificial intelligence applications in which point clouds can be used, such as augmented/virtual reality (e.g. layout detection for interior scenes) and autonomous driving (e.g. to extract the driveable regions). As a result, performing deep learning tasks on point clouds has received significant attention from both academia and industry, and artificial neural networks (referred to herein simply as neural networks) have been developed to process point clouds, which may be referred to herein as point cloud neural networks. As is known to those of skill in the art, a neural network comprises one or more interconnected layers that can be used for machine learning applications. In particular, a neural network can be used in signal processing applications, including, but not limited to, image processing and computer vision applications.
The data input to and output from a layer of a neural network can be described as a tensor. As is known to those of skill in the art, a tensor is a generalization of vectors and matrices and can be considered an n-dimensional array. A vector is a one-dimensional tensor, and a matrix is a two-dimensional tensor. The tensors in a neural network are often, but are not necessarily, four-dimensional. Reference is made to
The processing that is performed on the input tensor to a layer depends on the type of layer. For example, each layer of a neural network may be one of a plurality of different types. Common neural network layer types include, but are not limited to, a convolution layer, an activation layer, a normalisation layer, a pooling layer, a fully connected layer, and a batch normalisation layer. It will be evident to a person of skill in the art that these are only example neural network layer types and there may be other neural network layer types.
A convolution layer convolves the input tensor with weights associated with the layer. Specifically, each convolution layer is associated with a plurality of weights w1 . . . wg, which may also be referred to as filter weights or coefficients. The weights are grouped to form one or more filters. Each filter is moved or slid across the input tensor in one or more dimensions in accordance with the stride in that dimension, and the dot-product of the input data and the weights of that filter is calculated at each filter location. The elements of the input tensor that are applied to the filter weights at a particular filter location are referred to as a window of the input tensor. There may be a bias per filter which is added to the result of the corresponding dot products.
There are many different types of convolution layers. Traditional neural networks often have one or more 2D or 3D convolution layers. In a 2D convolution (which may be referred to herein as a standard 2D convolution), each filter has a dimension KH×KW×Cin (i.e., each filter may comprise a set of KH×KW×Cin weights w) wherein Cin is the number of channels of the input tensor such that each filter generates a channel of the output. Each filter channel may be described as a kernel of size KH×KW. Accordingly, depending on the number of channels, a filter may comprise one or more kernels. Each filter is slid across the input tensor in steps sH and sW in the H and W dimensions respectively, which are referred to as the strides of the convolution.
Reference is now made to
Point cloud neural networks, however, may comprise one or more 2D or 3D sparse submanifold convolution layers. A 2D/3D sparse submanifold convolution is the same as its corresponding standard 2D/3D convolution except that the output elements are only calculated for positions of the filter in which one or more predetermined elements of the filter kernel is/are aligned with an active position of the input tensor. An active position of the input tensor is a height and width position of the input tensor in which at least one channel of the input tensor has a non-zero value or element at that position. The one or more predetermined elements of the filter kernel may comprise the centre element of the filter kernel and/or one or more elements close to the centre of the filter kernel. In the examples described herein there is a single predetermined element of the filter kernel that is the centre element of the filter kernel. However, it will be evident to a person of skill in the art that this is an example only. A sparse submanifold convolution is designed to work on a sparse input tensor.
Reference is now made to
As sparse submanifold convolution layers are becoming more popular in neural networks, it is important to be able to implement sparse submanifold convolutions in a hardware efficient manner (e.g., in a manner that requires less silicon area and/or less processing power).
The embodiments described below are provided by way of example only and are not limiting of implementations which solve any or all of the disadvantages of known methods and systems for implementing a sparse submanifold convolution.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Described herein are methods of implementing a sparse submanifold convolution on a graphics processing unit. The methods include: receiving, at the graphics processing unit, an input tensor in a dense format; identifying, at the graphics processing unit, active positions of the input tensor; performing, at the graphics processing unit, an indexed unfold operation on the input tensor based on the identified active positions to generate an input matrix comprising elements of the input tensor in each active window of the input tensor; and performing, at the graphics processing unit, a matrix multiplication between a weight matrix and the input matrix to generate an output matrix that comprises elements of an output tensor of the sparse submanifold convolution based on the active windows. The methods may further comprise performing, at the graphics processing unit, an indexed fold operation on the output matrix based on the active windows to generate an output tensor in a dense format.
A first aspect provides a method of implementing a sparse submanifold convolution on a graphics processing unit, the method comprises: receiving, at the graphics processing unit, an input tensor in a dense format; identifying, at the graphics processing unit, active positions of the input tensor; performing, at the graphics processing unit, an indexed unfold operation on the input tensor based on the identified active positions of the input tensor to generate an input matrix comprising elements of the input tensor in each active window of the input tensor; and performing, at the graphics processing unit, a matrix multiplication between a weight matrix and the input matrix to generate an output matrix that comprises elements of an output tensor of the sparse submanifold convolution based on the active windows.
The input tensor may have at least a height dimension, a width dimension and a channel dimension and an active position of the input tensor may be a height and width position in which at least one channel of the input tensor has a non-zero element.
Identifying the active positions of the input tensor may comprise: identifying a height, width, and channel position of each non-zero element in the input tensor; identifying unique height and width pairs from the identified height, width and channel positions; and identifying the unique height and width pairs as the active positions.
An active window of the input tensor may be a window of the input tensor, used to generate an element of the output tensor of the sparse submanifold convolution, wherein one or more predetermined positions within the window are active positions.
The one or more predetermined positions within a window may comprise a centre position of the window.
Performing the indexed unfold operation on the input tensor may comprise identifying, from the identified active positions and one or more parameters of the sparse submanifold convolution, the active windows of the input tensor.
Identifying, from the identified active positions and the one or more parameters of the sparse submanifold convolution, the active windows of the input tensor may comprise, for each identified active position: determining if the active position is at a predetermined position within a window of the input tensor for the sparse submanifold convolution; and in response to determining that the active position is at the predetermined position within a window of the input tensor for the sparse submanifold convolution, identifying the window as an active window.
The active windows of the input tensor may be based on a version of the input tensor that is padded such that when the sparse submanifold convolution has a stride of one in each sparse submanifold convolution dimension there is an active window for each active position, and identifying, from the identified active positions and the one or more parameters of the sparse submanifold convolution, the active windows of the input tensor may comprise identifying the active window corresponding to each active position.
Performing the indexed unfold operation on the input tensor may comprise identifying the elements of the input tensor in each active window.
Identifying the elements of the input tensor in an active window may comprise implementing a series of nested loops, the series of nested loops comprising a loop for each dimension of an active window that loops from a predetermined position within the window through the elements of the active window in that dimension.
Performing the indexed unfold operation on the input tensor may comprise storing the elements of each active window in the input matrix.
The method may further comprise receiving a zeroed input matrix, and the elements of each active window of the input tensor are stored in the received input matrix.
The method may further comprise performing, at the graphics processing unit, an indexed fold operation on the output matrix based on the active windows to generate an output tensor in a dense format.
There may be an active window for each active position of the input tensor, and performing the indexed fold operation on the output matrix may comprise placing each element in the output matrix at the corresponding active position in a channel of the output tensor.
Performing the indexed fold operation on the output matrix may comprise, for each active window, looping through each channel of the output tensor and placing the element of the output matrix corresponding to that active position and that channel at the active position of that channel in the output tensor.
Performing the indexed fold operation on the output matrix may comprise identifying a position in the output tensor of each element in the output matrix, based on the active windows and one or more parameters of the sparse submanifold convolution, and placing each element of the output matrix in the corresponding identified position in the output tensor.
The method may further comprise receiving a zeroed output tensor, and the elements of the output matrix may be stored in the received output tensor.
Performing the indexed fold operation on the output matrix may further comprise storing zeroes at each position of the output tensor that does not comprise an element of the output matrix.
The input matrix may comprise a column for each active window of the input tensor and each column of the input matrix comprises the elements of the input tensor in the corresponding active window.
The weight matrix may comprise a row for each filter to be applied to the input tensor and each row of the weight matrix may comprise all weights forming the corresponding filter.
A second aspect provides a graphics processing unit configured to perform the method of the first aspect.
The graphics processing unit may be embodied in hardware on an integrated circuit.
A third aspect provides a method of implementing a sparse submanifold convolution using a neural network accelerator, the method comprising: receiving, at the neural network accelerator, an input tensor in a sparse format; performing at the neural network accelerator, for each position of a kernel of the sparse submanifold convolution, a 1x1 convolution between the received input tensor and weights of filters of the sparse submanifold convolution at that kernel position to generate a plurality of partial outputs; and combining appropriate partial outputs of the plurality of partial outputs to generate an output tensor of the sparse submanifold convolution in sparse format.
The input tensor in sparse format may comprise, for each active position of a corresponding input tensor in dense format, an element for each channel of the input tensor in dense format.
The input tensor in dense format may have at least a height dimension, a width dimension and a channel dimension and an active position of the input tensor in dense format may be a height and width position in which at least one channel of the input tensor in dense format has a non-zero element.
The neural network accelerator may comprise a convolution accelerator configured to accelerate convolution operations, and the 1x1 convolutions are performed by the convolution accelerator.
Each filter of the sparse submanifold convolution may comprise one or more kernels of size KH×KW, where KH is a height of the kernel and KW is a width of the kernel, and each kernel comprises KH×KW weights, each at a different position of the kernel.
Each element of the output tensor in sparse format may be representable as a sum of one or more partial outputs of the plurality of partial outputs.
Combining the appropriate partial outputs of the plurality of partial outputs to generate the output tensor of the sparse submanifold convolution in sparse format may comprise: performing a matrix multiplication between each channel of each 1x1 convolution output tensor and a scatter matrix for the 1x1 convolution to group the partial outputs of the plurality of partial outputs that are relevant to each element of the output tensor in sparse format; and combining, via one or more addition operations, the grouped partial outputs.
The neural network accelerator may comprise an element-wise operations accelerator configured to accelerate performing element-wise operations on an input tensor, and the combining, via addition operations, may be performed by the element-wise operations accelerator.
The neural network accelerator may comprise a convolution accelerator configured to accelerate convolution operations and the matrix multiplications may be performed using the convolution accelerator.
The scatter matrix for a 1x1 convolution may identify partial outputs of an output tensor of that 1x1 convolution that are relevant to an element of the output tensor and, for each relevant partial output, identify which element of the output tensor the partial output is relevant to.
The scatter matrix for a 1x1 convolution may be configured such that a result of multiplying that scatter matrix with a channel of an output tensor of that 1x1 convolution is a matrix that (i) comprises only the partial outputs of that channel relevant to an element of the output tensor and (ii) identifies which element of the output tensor each of those partial outputs is relevant to.
The scatter matrix for a 1x1 convolution may be configured such that a result of multiplying that scatter matrix with a channel of an output tensor of that 1x1 convolution is a matrix that comprises only the partial outputs of that channel relevant to an element of the output tensor and each relevant partial output is in a row or a column of the matrix that corresponds to the element of the output tensor that the partial output is relevant to.
A scatter matrix for a 1x1 convolution may comprise a column for each active position of the input tensor and a row for each active position of the output tensor, and the scatter matrix may comprise a first predetermined value in row B, column A if a partial output in row A of a channel of the 1x1 convolution output tensor is relevant to active position B of the output tensor and a second, different, predetermined value otherwise.
The first predetermined value may be one and the second predetermined value may be zero.
The method may further comprise generating the scatter matrices at a component external to the neural network accelerator based on active positions of the input tensor in dense format and one or more parameters of the sparse submanifold convolution.
Combining the appropriate partial outputs of the plurality of partial outputs to generate the output tensor of the sparse submanifold convolution in the sparse format may comprise performing a scatter-add operation on output tensors of the 1x1 convolutions.
Performing the scatter-add operation on the output tensors of the 1x1 convolutions may comprise: receiving information identifying which partial outputs of the plurality of partial outputs are relevant to each element of the output tensor in dense format; retrieving the partial outputs relevant to each element of the output tensor in dense format from the output tensors of the 1x1 convolutions; and combining the retrieved partial outputs relevant to each element of the output tensor to generate that output element.
The neural network accelerator may comprise a processor and the scatter-add operation may be performed by the processor of the neural network accelerator.
The method may further comprise generating, at a component external to the neural network accelerator, the information identifying which partial outputs of the plurality of partial outputs are relevant to each element of the output tensor in dense format.
Each position of the kernel may be identified by a width coordinate and a height coordinate.
The output tensor of the sparse submanifold convolution in sparse format may comprise, for each active position of an output tensor in dense format, an element for each channel of the output tensor.
A fourth aspect provides a computer system comprising a neural network accelerator, the computer system configured to implement the method of the third aspect.
The neural network accelerator may be embodied in hardware on an integrated circuit.
A fifth aspect provides a computer readable medium having stored thereon computer readable code configured to cause a computer system comprising a neural network accelerator to perform the method of the third aspect when the code is run.
The method of the first aspect or the third aspect may be implemented as part of processing data in accordance with a neural network to perform a signal processing task.
The input tensor of the first aspect or the third aspect may comprise image data such that performing the sparse submanifold convolution comprises a method of processing image data.
The image data may comprise a point cloud data set generated by an image sensor.
The sparse submanifold convolution of the first aspect or the third aspect may be a 3D sparse submanifold convolution and the input tensor may comprise a 3D point cloud data set.
A sixth aspect provides a method of implementing a standard convolution on a central processing unit, the method comprising: receiving, at the central processing unit, an input tensor in a dense format; performing, at the central processing unit, an indexed unfold operation on the input tensor based on the identified active positions of the input tensor to generate an input matrix comprising elements of the input tensor in each non-zero window of the input tensor; and performing, at the central processing unit, a matrix multiplication between a weight matrix and the input matrix to generate an output matrix that comprises elements of an output tensor of the standard convolution based on the non-zero windows of the input tensor.
A seventh aspect provides a method of implementing a sequence of two sparse submanifold convolutions, the method comprising: receiving, at a graphics processing unit, an input tensor, in a dense format, to a first sparse submanifold convolution of the sequence; identifying, at the graphics processing unit, active positions of the input tensor; performing, at the graphics processing unit, an indexed unfold operation on the input tensor based on the identified active positions to generate an input matrix comprising elements of the input tensor in each active window of the input tensor; performing, at the graphics processing unit, a matrix multiplication between a weight matrix and the input matrix to generate an output matrix that comprises elements of an output tensor of the first sparse submanifold convolution based on the active windows; providing the output matrix to a neural network accelerator as an input tensor, in a sparse format, to a second sparse submanifold convolution of the sequence; performing, at the neural network accelerator, for each position of a kernel of the second sparse submanifold convolution, a 1x1 convolution between the received input matrix and weights of filters of the sparse submanifold convolution at that kernel position to generate a plurality of partial outputs; and combining appropriate partial outputs of the plurality of partial outputs to generate an output tensor, in sparse format, of the second sparse submanifold convolution.
The neural network accelerators, convolution processing units, convolution engines, and graphics processing units described herein may be embodied in hardware on an integrated circuit. There may be provided a method of manufacturing, at an integrated circuit manufacturing system, a neural network accelerator, convolution processing unit, convolution engine or graphics processing unit described herein. There may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, configures the system to manufacture an integrated circuit that embodies a neural network accelerator, convolution processing unit, convolution engine or graphics processing unit described herein. There may be provided a non-transitory computer readable storage medium having stored thereon a computer readable description of a neural network accelerator, convolution processing unit, convolution engine or graphics processing unit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture an integrated circuit embodying the neural network accelerator, convolution processing unit, convolution engine or graphics processing unit.
There may be provided an integrated circuit manufacturing system comprising: a non-transitory computer readable storage medium having stored thereon a computer readable description of a neural network accelerator, convolution processing unit, convolution engine or graphics processing unit described herein; a layout processing system configured to process the computer readable description so as to generate a circuit layout description of an integrated circuit embodying the neural network accelerator, convolution processing unit, convolution engine or graphics processing unit; and an integrated circuit generation system configured to manufacture an integrated circuit embodying the neural network accelerator, convolution processing unit, convolution engine or graphics processing unit according to the circuit layout description.
There may be provided computer program code for performing any of the methods described herein. There may be provided a non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform any of the methods described herein.
The above features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the examples described herein.
Examples will now be described in detail with reference to the accompanying drawings in which:
The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features.
DETAILED DESCRIPTION
The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art.
Embodiments will now be described by way of example only.
As described above, as sparse submanifold convolution layers are becoming more popular in neural networks, it is important to be able to implement sparse submanifold convolutions in a hardware efficient manner (e.g., in a manner that requires less silicon area or less processing power).
Methods which are known to the Applicant, which is not an admission that they are known outside of the Applicant company or that they are well-known, for implementing a sparse submanifold convolution on a GPU, such as those implemented by TorchSparse and SpConv, include performing a gather operation, a matrix multiply (MatMul) operation, and a scatter operation. Specifically, in GPU implementations: (1) the active points in the input tensor are determined; (2) a HashMap is then built to store the information of which (active) points need to be multiplied by each weight; (3) a gather operation is then used to generate, from the HashMap, an input data matrix (or a set of input data matrices) and a parameter/weight matrix (or a set of parameter matrices) so that each (active) point is multiplied by the relevant weight; (4) a matrix multiplication is performed between the input matrix/matrices and the parameter matrix/matrices to generate partial results; and (5) a scatter-add operation is then performed, based on the HashMap, to combine the partial results to generate the final outputs and put them in the correct location in the output tensor. However, the gather and scatter operations can be time and resource intensive to implement.
Accordingly, the inventors have developed methods for implementing a sparse submanifold convolution in a more hardware efficient manner, compared to the methods known to the Applicant, using a GPU and/or a neural network accelerator (NNA) which take advantage of the hardware features of GPUs and NNAs respectively. Specifically, the methods described herein for implementing a submanifold convolution are particularly adapted to take into consideration the internal functioning of a GPU and/or an NNA. In particular, as described in more detail below, the GPU-based methods are designed to take advantage of the parallel architecture (e.g. SIMD architecture) of GPUs and the NNA-based methods are designed to take advantage of the architecture of NNAs that enable convolution operations to be implemented in a hardware efficient (in terms of time and resources) manner.
GPU Implementations of Sparse Submanifold Convolution
The inventors have determined that a sparse submanifold convolution can be performed efficiently on a GPU by performing an indexed unfold operation on an input tensor in dense format to generate an input matrix of active windows of the input tensor, performing a matrix multiplication between a weight matrix and the input matrix, and, optionally, performing an indexed fold operation on the output of the matrix multiplication to generate an output tensor in dense format. GPUs are often implemented using a single instruction multiple data (SIMD) architecture. The inventors have determined that indexed unfold operations, matrix multiplications, and indexed fold operations can all be performed very efficiently in a parallelised manner using a SIMD architecture allowing a sparse submanifold convolution to be performed in a very hardware and resource efficient manner using the specific hardware structure of a GPU in the manner described herein.
Reference is now made to
At block 504, the GPU identifies the active locations or positions in the received input tensor. As noted above, an active location is a height and width position or location of the input tensor in which at least one channel of the input tensor has a non-zero value. Each active location or position may be identified with a set of indices: a height or row index, and a width or column index.
The active locations or positions in the received input tensor may be identified using any suitable technique. For example, there are available software modules, such as the nonzero function in PyTorch®, which can be run efficiently on a GPU and which, when provided with an input tensor, will return a list of the non-zero co-ordinates of that input tensor. This list will include the co-ordinates of every non-zero element in the input tensor. This means that if there are two channels that have non-zero elements at the same height and width position, then that height and width position will be identified twice, once for each channel. Since an active location is a height and width position in which at least one channel has a non-zero value, the list of co-ordinates generated by the nonzero function may be further processed to identify only the unique height and width positions or co-ordinates in the list. In other words, the list of the co-ordinates generated by the nonzero function may be further processed to eliminate duplicate height and width positions or co-ordinates. In another example, the active locations or positions in the received input tensor may be identified by first using SpConv's from_dense function (which relies on PyTorch®'s to_sparse function) to generate a tensor in sparse format, in coordinate format, from the input tensor in dense format and then identifying the active locations from the tensor in sparse format.
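By way of illustration only, the following Python sketch shows one way in which block 504 could be implemented using PyTorch®'s nonzero function followed by a reduction to unique height and width pairs. The tensor layout [N, C, H, W] and the helper name find_active_positions are assumptions made for the purposes of the example.

import torch

def find_active_positions(input_tensor: torch.Tensor) -> torch.Tensor:
    # Co-ordinates of every non-zero element: one row of (n, c, h, w) indices per element.
    nonzero_coords = torch.nonzero(input_tensor)        # shape [num_nonzero, 4]
    # Keep only the height and width columns and remove duplicates, so that a position
    # that is non-zero in more than one channel is identified only once.
    hw_coords = nonzero_coords[:, 2:4]
    return torch.unique(hw_coords, dim=0)               # shape [num_active, 2]

# Example: a 1x1x5x5 input tensor with three active positions.
x = torch.zeros(1, 1, 5, 5)
x[0, 0, 1, 1] = 3.0
x[0, 0, 2, 2] = 5.0
x[0, 0, 2, 3] = 7.0
print(find_active_positions(x))   # tensor([[1, 1], [2, 2], [2, 3]])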
In some cases, the output of this block 504 may be a list of the height and width (or column and row) co-ordinates or indices of the active positions. For example,
Once the active locations or positions in the received input tensor have been identified, the method 500 proceeds to block 506.
At block 506, the GPU performs an indexed unfold operation (which may also be referred to as a sparse unfold operation) on the received input tensor based on the active locations or positions identified in block 504 to generate an input matrix that comprises the elements of the active windows of the input tensor. In a standard unfold operation (which may also be referred to as an image to column (im2col) operation or a default unfold operation), the elements of the input tensor in each window are placed in a column of the input matrix. As described above, each window generates one output element per channel of the output tensor. Accordingly, if there are P elements in each channel of the output tensor (meaning there are P windows of the input tensor), there will be P columns in the input matrix of a standard unfold operation. An unfold operation can also be described as a flattening operation. A standard unfold operation will now be explained via an example.
Reference is now made to
Although in the example of
However, as explained above, in a sparse submanifold convolution, elements of the output tensor are only generated for windows of the input tensor in which one or more predetermined elements (the centre element in this example) of the window is/are active (which may be referred to herein as the active windows). Therefore, only the columns of the input matrix 702 corresponding to active windows are used in a sparse submanifold convolution. As a result, many of the columns in an input matrix generated by a standard unfold operation are not required for a sparse submanifold convolution.
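Purely as a point of reference, a standard unfold of this kind is available as the unfold (im2col) function of PyTorch®'s torch.nn.functional module, which produces one column per window of the input tensor regardless of whether the window is active; the tensor sizes in the following sketch are illustrative only.

import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 5, 5)                   # [N, Cin, H, W]
cols = F.unfold(x, kernel_size=3, padding=1)  # [N, Cin*KH*KW, P] = [1, 18, 25]
# Each of the P = 25 columns holds the Cin*KH*KW = 18 elements of one window of the
# (padded) input tensor, i.e. one column per window whether or not the window is active.
print(cols.shape)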
Accordingly, the inventors have developed an indexed unfold operation in which the input matrix only comprises a column for each active window, wherein the active windows are identified from the active locations or positions identified in block 504. Specifically, the indexed unfold operation uses the active locations or positions, and the sparse submanifold convolution parameters (e.g. stride, dilation, kernel size), to determine the active windows of the input tensor, and then generates an input matrix with a column for each active window, wherein each column comprises the elements in the corresponding active window. For example, as shown in
In some cases, padding may be applied to the input tensor so that any active point on an edge (i.e. in the first/last column or first/last row) of the input tensor can be the centre of a window. The number of rows and columns of padding that are added are based on the size of the kernel. For example, where a filter has a kernel with a height KH then └KH/2┘ rows of padding may be added to each of the top and the bottom of the input tensor; and where a filter has a kernel with a width KW then └KW/2┘ columns of padding may be added to each of the left and the right of the input tensor. For example, as shown in
Using an indexed unfold operation, as opposed to a standard unfold operation, to generate an input matrix can significantly reduce the size of the input matrix which can significantly reduce the computations to implement the matrix multiplication of block 508 (described below). Table 1 illustrates the size of the input matrix when generated from an indexed unfold operation compared to the size of the input matrix when generated from a standard unfold operation for an example input tensor of size [1, 1, 1000, 1000] (i.e. one batch, one channel, a height of 1000 and a width of 1000) for a sparse submanifold convolution with a 3x3 kernel, strides and dilations of 1 and no padding, for different levels of sparsity. Generally the sparser the input tensor, the smaller the input matrix generated by an indexed unfold operation and the more memory efficient the method described with respect to
The indexed unfold operation may be implemented on the GPU in any suitable manner. In one example, where there is an active window per active position, the indexed unfold operation may be implemented by indexing each active location (e.g. from 0 to the number of active locations) and, for each active position/location, identifying the surrounding positions/locations (i.e. the positions/locations in the window of the input tensor centred at the active position) using multiple nested loops, one for each dimension of a window. For example, where the windows are 2D windows with a height dimension and a width dimension, there may be one loop for the height dimension and another for the width dimension. Specifically, for a 2D convolution with 2D windows, an offset may be created for each of the height and width dimensions that is equal to └k/2┘,
where k is the size of the kernel in that dimension, and then the GPU is configured to loop from −offset to +offset from the respective active location. Where the windows are 3D with a channel dimension, there may be an additional loop that loops through the channels. The element at each identified position is then copied into the appropriate position of the input matrix. As described above, each column in the input matrix may correspond to one active window. The values at the offset positions, starting from top left, may be ordered from top down. This may be implemented by having a separate thread for each active location/position. Since the same operations are performed for each active position, this can be efficiently implemented by a SIMD architecture—e.g. applying a single instruction (or set of instructions) to multiple pieces of input data (i.e., multiple active positions).
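A minimal CPU reference sketch of such an indexed unfold is given below, assuming a stride and dilation of one in each dimension, an input tensor in [N, C, H, W] format with a batch size of one, and an active window per active position; the function and variable names are illustrative only. On a GPU, each iteration of the outer loop over the active positions would typically be mapped to its own thread.

import torch
import torch.nn.functional as F

def indexed_unfold(x: torch.Tensor, active_positions: torch.Tensor,
                   kh: int, kw: int) -> torch.Tensor:
    n, c, h, w = x.shape
    off_h, off_w = kh // 2, kw // 2                      # offset = floor(k/2) in each dimension
    # Pad the input so that an active position on an edge can be the centre of a window.
    x_pad = F.pad(x, (off_w, off_w, off_h, off_h))
    num_active = active_positions.shape[0]
    input_matrix = torch.zeros(kh * kw * c, num_active)  # one column per active window
    for col, (ah, aw) in enumerate(active_positions.tolist()):
        row = 0
        # One loop per window dimension, from -offset to +offset about the active position.
        for dh in range(-off_h, off_h + 1):
            for dw in range(-off_w, off_w + 1):
                for ch in range(c):
                    input_matrix[row, col] = x_pad[0, ch, ah + dh + off_h, aw + dw + off_w]
                    row += 1
    return input_matrix

Applied to the 1x1x5x5 example input and three active positions of the earlier sketch with a 3x3 kernel, indexed_unfold returns a 9x3 input matrix, one column per active window.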
Where there are fewer active windows than there are active positions due to the stride, dilation, or other sparse submanifold convolution parameters, then the indexed unfold operation may be implemented on the GPU by, prior to identifying the positions/locations surrounding the active position (e.g. using the nested loops), performing a validity check to determine if the active position produces or induces an active window, and only identifying the surrounding positions/locations and adding the elements from the identified positions to the input matrix if the active position induces an active window.
In some cases, in addition to receiving the input tensor in dense format, and the active positions, the indexed unfold operation may also receive a zeroed input matrix with the desired shape (e.g. height=elements per kernel×number of channels, width=number of active windows) and the GPU may be configured to write the active window elements to the appropriate location in the received input matrix.
Once the GPU has performed an indexed unfold operation on the input tensor to generate an input matrix with the elements of each active window of the input tensor, the method 500 proceeds to block 508.
At block 508, the GPU performs a matrix multiplication operation (which may be referred to as a MatMul operation) between a weight matrix and the input matrix generated in block 506 to generate an output matrix.
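A small illustration of this matrix multiplication (with shapes that are assumptions for the example) is given below: the weight matrix has one row per filter containing all of the weights of that filter, the input matrix has one column per active window, and a single matrix multiplication produces one output element per filter per active window.

import torch

cout, cin, kh, kw, num_active = 4, 2, 3, 3, 6
weights = torch.randn(cout, cin, kh, kw)                  # filters of the sparse submanifold convolution
# Flatten each filter into a row, using the same (kh, kw, cin) ordering as the rows of the
# input matrix so that corresponding weights and input elements line up.
weight_matrix = weights.permute(0, 2, 3, 1).reshape(cout, kh * kw * cin)   # [Cout, KH*KW*Cin]
input_matrix = torch.randn(kh * kw * cin, num_active)     # e.g. produced by an indexed unfold
output_matrix = weight_matrix @ input_matrix              # [Cout, num_active]
print(output_matrix.shape)                                # torch.Size([4, 6])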
The matrix multiplication between the weight matrix 1002 and the input matrix 802 generates an output matrix 1004 which comprises, for each active window, an output element for each channel of the output tensor (i.e., for each filter of the sparse submanifold convolution). The output matrix 1004 may have a column for each active window and a row for each output channel such that each column comprises an output element for the corresponding active window for each output channel. In the example shown in
Once the GPU has performed the matrix multiplication the method 500 may end or the method 500 may proceed to block 510. Specifically, the output matrix 1004 generated in block 508 comprises all of the non-zero elements of the output tensor (in other words the output matrix corresponds to the output tensor in sparse format) and thus the output matrix 1004 may be simply output, or an output tensor in dense format may be first generated from the output matrix 1004. When the method is performed as part of a neural network, whether or not the output matrix is converted to an output tensor in dense format may depend on what input tensor format is expected by the next layer in the neural network.
At block 510, the GPU performs an indexed fold operation on the output matrix generated in block 508 to generate an output tensor 1102 in dense format. The indexed fold operation is the opposite of the indexed unfold operation performed in block 506. Specifically, the indexed fold operation uses the active windows to generate an output tensor in dense format (e.g. a densified output tensor) from the output matrix 1004 generated in block 508. In other words, the indexed fold operation generates an output tensor with the elements of the output matrix 1004 in the correct position of the output tensor in dense format and zeros elsewhere.
A standard fold operation (which may also be referred to as a column to image (col2im) operation) receives an output matrix in dense format—i.e. an output matrix that comprises a row per output channel with an output element for each element of that channel—and converts each row of the received matrix to a plane of the output tensor in accordance with the size of a channel of the output tensor. For example, if each channel of the output tensor is of size 5x5, then a standard fold operation places every five elements in the same row of the received matrix in a different row of the corresponding channel. For example, the first five elements in the first row of the received matrix would be placed in the first row of the first channel of the output tensor, the next five elements in the first row of the received matrix would be placed in the second row of the first channel and so on; the first five elements in the second row of the received matrix would be placed in the first row of the second channel of the output tensor, the next five elements in the second row of the received matrix would be placed in the second row of the second channel and so on.
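In the case described above (one row per output channel, with one element for every element of that channel), the standard fold amounts to reshaping each row of the received matrix into a height-by-width plane of the output tensor; a minimal illustration with assumed sizes:

import torch

cout, h, w = 3, 5, 5
dense_output_matrix = torch.randn(cout, h * w)             # one row per output channel
output_tensor = dense_output_matrix.reshape(cout, h, w)    # each row becomes a HxW plane
print(output_tensor.shape)                                 # torch.Size([3, 5, 5])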
In contrast to a standard fold operation, an indexed fold operation receives an output matrix in sparse format—i.e., the received matrix does not comprise an element for every element of the output tensor in dense format. An indexed fold operation also receives information indicating the position in the output tensor corresponding to each active window of the input tensor (this may be information (e.g. indices) identifying the active positions identified in block 504; information specifically identifying the active windows (e.g. indices of an element of each active window) (which may be generated as part of block 506); or information specifically identifying the position in the output tensor corresponding to each active window (which may be generated as part of block 506)). The information indicating the position in the output tensor corresponding to each active window is then used to place the elements of the sparse output matrix in the correct location of the output tensor in dense format. The elements at all other positions may then be set to zero.
Where there is an active window per active position (e.g. because the input is padded and the stride is 1 in all directions), then the height and width of the input tensor will be the same as the height and the width of the output tensor, and the elements in the sparse output matrix 1004 can be simply placed in the same location in the output tensor as the corresponding active position. For example, in such cases, if an active window is centred at active position (1,1) then the output elements based on that active window will be placed at position (1,1) of the corresponding output channel. If, however, there is not an active window per active position and the received information does not explicitly identify the output position corresponding to each active window, the location of each element in the output matrix 1004 is determined from the received information and/or the parameters of the sparse submanifold convolution. For example, active windows can be determined from the active locations and the parameters of the sparse submanifold convolution (e.g. filter dimensions, strides, dilations), and the position in the output tensor corresponding to each active window can be determined from the active window and the parameters of the sparse submanifold convolution.
The indexed fold operation may be implemented on the GPU in any suitable manner. In one example, where there is an active window per active position of the input tensor, an indexed fold operation may be implemented on the GPU by, for example, creating an index for each active position/location (e.g. from 0 to n where there are n+1 active locations in the input tensor). Then, for each active position/location, the GPU may be configured to determine the associated location in the output tensor from the location of the corresponding active position and the kernel parameters, then loop through each output channel and place or copy the appropriate element of the output matrix in the determined location in the corresponding channel of the output tensor. Each active location may get its own thread.
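A corresponding CPU reference sketch of the indexed fold for this case (one active window per active position, so that the output location of each column of the output matrix is simply the corresponding active position) is given below; on a GPU each active position would typically be given its own thread, and the names and shapes are assumptions for the example.

import torch

def indexed_fold(output_matrix: torch.Tensor, active_positions: torch.Tensor,
                 out_h: int, out_w: int) -> torch.Tensor:
    cout, num_active = output_matrix.shape
    # A zeroed output tensor of the appropriate dimensions may instead be received as an input.
    output_tensor = torch.zeros(1, cout, out_h, out_w)
    for col, (ah, aw) in enumerate(active_positions.tolist()):
        for ch in range(cout):                             # loop through each output channel
            output_tensor[0, ch, ah, aw] = output_matrix[ch, col]
    return output_tensor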
Where there are fewer active windows than active positions, due to, for example, the stride, dilation, or other convolution parameters, then the indexed fold operation may be implemented on the GPU by, prior to determining the associated location in the output tensor for an active position, performing a validity check on the active position to determine if the active position produces or induces an active window, and only determining the associated location in the output tensor and performing the copying if the active position induces an active window. For example, if the stride is two for a sparse submanifold convolution, then the validity check may comprise determining if the active position is at or on an odd index, and if it is determined that the active position is at or on an odd index determining that the active position does not induce an active window.
In some cases, in addition to receiving the sparse submanifold convolution parameters, information identifying the active windows, and the output matrix generated in block 508, the indexed fold operation may also receive a zeroed output tensor of the appropriate dimensions and may write the elements of the received matrix to the received output tensor. In such cases, a zero may not have to be explicitly written to the positions of the output tensor that do not comprise an element of the output matrix.
Once the indexed fold operation has been performed, the method 500 may end.
In the examples described above, a row of the weight matrix corresponds to a filter (i.e., comprises all of the weights of a filter) and a column of the input matrix corresponds to a window of the input tensor (i.e., comprises all of the elements in the window of the input tensor), and the weight matrix is multiplied with the input matrix, such that the dot product of a row of the weight matrix and a column of the input matrix is calculated, to obtain the output matrix. However, it will be evident to a person of skill in the art that this is an example only and in other examples, the rows and columns of the weight matrix and the input matrix may be reversed (in other words the weight matrix and the input matrix may be transposed) and the transposed input matrix may be multiplied with the transposed weight matrix to obtain the transpose of the output matrix.
In some cases, the input matrix may be transposed after the indexed unfold operation, and the output matrix may be transposed prior to the indexed fold operation. However, the weight matrix may be transposed offline and simply provided to the GPU as an input. In some cases, where a sequence of two or more sparse submanifold convolutions is performed, only the input matrix of the first sparse submanifold convolution may be transposed and only the output matrix of the last sparse submanifold convolution may be transposed. Specifically, the middle convolutions are simply performed with transposed matrices.
As described in more detail below, testing has shown that the described method of implementing a sparse submanifold convolution allows a sparse submanifold convolution to be implemented more efficiently in terms of computing time and resources than known GPU-based methods.
Test Results
Reference is now made to
It can be seen from
Performing forward and backward passes of a neural network is often expensive to implement on a CPU or GPU in terms of computations, bandwidth and power. Accordingly, neural network accelerators (NNAs) have been developed that allow neural networks to be implemented in a hardware efficient manner (e.g., in a manner that requires less silicon area or less processing power).
An NNA is hardware that is designed to accelerate the processing of a neural network. As is known to those of skill in the art, a hardware accelerator is hardware designed to perform a specific set of one or more functions more efficiently than a general processing unit, such as a central processing unit (CPU). Accordingly, in contrast to a general CPU which can be configured to perform any number of functions, an accelerator can only perform a limited set of one or more functions. NNAs have one or more network processing hardware units (which may simply be referred to as processing units) which are each designed to accelerate one or more neural network operations. Therefore a graphics processing unit (GPU) with one or more network processing hardware units designed to accelerate one or more neural network operations can be understood to be an NNA itself or can be understood to comprise an NNA. A neural network operation is defined herein as an operation that is used to implement all or a part of a neural network layer. A neural network layer may be implemented by one or more neural network operations. Example neural network operations include, but are not limited to, convolution operations, non-linear operations, pooling operations and normalisation operations.
An NNA may therefore have, for example, a convolution processing unit which is configured to accelerate convolution operations, an activation processing unit which is configured to accelerate non-linear operations, a pooling processing unit which is configured to accelerate pooling operations, and/or a normalisation processing unit configured to accelerate normalisation operations. It will be evident to a person of skill in the art that this is just an example set of network processing hardware units that an NNA may have, and NNAs may have additional network processing hardware units, fewer network processing hardware units or a different combination of network processing hardware units. An example NNA is described below with reference to
The inventors have determined that a sparse submanifold convolution can be performed efficiently using an NNA by performing, for each kernel location, a 1x1 convolution between the input tensor in sparse format and the weight(s) at that kernel location to generate partial outputs and then combining the appropriate partial outputs to generate the final output elements. The combining of the partial outputs generated by the 1x1 convolutions may be implemented in a number of different ways. In some cases, where, for example, the NNA comprises a hardware component that can perform a scatter-add operation, the combining of the partial outputs may be implemented by performing a scatter-add operation on the partial outputs using that hardware component. In other cases, where, for example, the NNA does not have a hardware component that can perform a scatter-add operation, the combining may be implemented by performing a matrix multiplication between the output of each 1x1 convolution and a corresponding scatter matrix, and combining the results of the matrix multiplications. These methods take advantage of the hardware structure of NNAs that allows convolution operations to be performed in a hardware efficient manner.
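As a purely illustrative sketch of the scatter-add option (with assumed names and layouts: each 1x1 convolution output arranged as a [num_active_in, Cout] matrix of partial outputs, and, for each 1x1 convolution, an index vector identifying the active output position to which each partial output contributes, or -1 if it contributes to none), the combining may be expressed as follows.

import torch

def scatter_add_combine(partial_outputs, target_indices, num_active_out, cout):
    # partial_outputs: one [num_active_in, Cout] tensor per 1x1 convolution (per kernel position).
    # target_indices:  one LongTensor per 1x1 convolution giving, for each partial output row,
    #                  the row of the output tensor (in sparse format) it contributes to, or -1.
    output = torch.zeros(num_active_out, cout)
    for partials, idx in zip(partial_outputs, target_indices):
        mask = idx >= 0                                   # keep only the relevant partial outputs
        output.index_add_(0, idx[mask], partials[mask])   # accumulate them at their output rows
    return output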
Reference is now made to
At block 1304, the NNA performs, for each position or location within the kernel of the filter(s) of the sparse submanifold convolution, a 1x1 convolution on the received input tensor using the weight(s) at that kernel position or location. As described above, an NNA often comprises hardware, such as a convolution processing unit (i.e., a convolution hardware accelerator), to accelerate convolution operations. Accordingly, the NNA may efficiently perform the 1x1 convolutions using such hardware.
As described above, a sparse submanifold convolution applies one or more filters of weights to active windows of an input tensor. Each filter of a sparse submanifold convolution may be referred to herein as a sparse submanifold filter. Each sparse submanifold filter comprises one or more kernels of size KH×KW, where KH is the height of the kernel (i.e., the number of weights in the height or y dimension) and KW is the width of the kernel (i.e., the number of weights in the width or x dimension). Each kernel thus comprises KH×KW weights, each at a different position or location within the kernel.
For each position within a kernel of a sparse submanifold filter, the weight(s) at that position are extracted to form a sub-filter. For example, the weights at position (−1,−1) form one sub-filter, the weights at position (−1,0) form another sub-filter and so on.
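A hedged PyTorch® sketch of this sub-filter extraction, together with the per-kernel-position 1x1 convolutions of block 1304, is given here (on the NNA itself the 1x1 convolutions would be performed by the convolution accelerator); the layout of the sparse-format input as [1, Cin, 1, num_active] and all names are assumptions for the example.

import torch
import torch.nn.functional as F

cout, cin, kh, kw, num_active = 4, 2, 3, 3, 3
weights = torch.randn(cout, cin, kh, kw)            # filters of the sparse submanifold convolution
sparse_input = torch.randn(1, cin, 1, num_active)   # one column per active position of the input

partial_outputs = []
for i in range(kh):
    for j in range(kw):
        # Sub-filter formed from the weight(s) at kernel position (i, j) of each filter.
        sub_filter = weights[:, :, i, j].reshape(cout, cin, 1, 1)
        # 1x1 convolution over the active positions, giving one partial output per filter
        # per active position.
        partial_outputs.append(F.conv2d(sparse_input, sub_filter))   # [1, Cout, 1, num_active]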
A 1x1 convolution is then performed on the received input tensor in sparse format for each kernel position, using the sub-filter(s) that correspond to that position (i.e. using the weights at that kernel position). The term “1x1” convolution is used herein to mean a convolution with a kernel that is of size 1x1 (i.e. KH=KW=1). For example, if the sparse submanifold convolution is to apply the filter 404 of
Each 1x1 convolution generates a tensor 1620, 1622, 1624, 1626, 1628, 1630, 1632, 1634, 1636 that comprises partial outputs that can be combined to generate the elements of the output tensor. Specifically, each element of the output tensor can be expressed as the sum of KH×KW dot products, where each dot product is the dot product of the weights at one kernel location and the elements of the input tensor at the corresponding location in a window of the input tensor. Where the corresponding location in a window is not active then that dot-product will be zero and can be ignored. The 1x1 convolution outputs will produce a dot product for each kernel position for each active position, thus any element in the output can be generated by combining the partial outputs generated by the 1x1 convolutions.
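In other words, using the notation of the example above purely for illustration (kernel positions (i, j) given as width and height offsets from the centre, the weight of filter f at kernel position (i, j) in channel c written wf(i, j, c), and the input element at height y, width x, channel c written p(y, x, c)), an element of the output tensor for filter f at active position (y, x) can be written as:

qf(y, x) = Σ(i,j) [ Σc wf(i, j, c) · p(y + j, x + i, c) ]

where the outer sum runs over all KH×KW kernel positions, each bracketed inner sum is the partial output produced by the 1x1 convolution for kernel position (i, j), and any term for which (y + j, x + i) is not an active position is zero and can be ignored.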
For ease of illustration, in the example of
Once the 1x1 convolutions have been performed the method 1300 proceeds to block 1306.
At block 1306, the NNA combines the appropriate partial outputs generated in block 1304 to generate the active elements of the output tensor (i.e. the elements of the output tensor at the active positions).
The appropriate partial outputs that are to be combined can be determined from the active positions of the input tensor and the sparse submanifold convolution parameters (e.g. kernel size, strides, dilation). For example, for the example sparse submanifold convolution of
The element or value q2 at the second active position in the first (and only) channel of the output tensor 412 is equal to the dot product of the second active window 408 of the input tensor 402 and the filter 404, and thus will be equal to p1*w(−1,−1)+p2*w(0,0)+p3*w(1,0). It can be seen from
Finally, the element or value q3 at the third active position in the first (and only) channel of the output tensor is equal to the dot product of the third active window 410 of the input tensor 402 and the filter 404, and thus will be equal to p2*w(−1,0)+p3*w(0,0). It can be seen from
The combining of the appropriate partial outputs generated at block 1304 may be implemented in a number of different ways. In some cases, the combining of the appropriate partial outputs may be implemented by the NNA by (i) performing a matrix multiplication between each channel of each 1x1 convolution output tensor and a corresponding scatter matrix to group or align the partial outputs that are relevant to each output element; and (ii) combining, via addition operations, the grouped or aligned partial outputs. As described above, an NNA often comprises hardware, such as a convolution processing unit (i.e., a convolution hardware accelerator), to accelerate convolution operations. Hardware that is efficient at performing convolution operations can also efficiently perform matrix multiplications. Accordingly, the NNA may efficiently perform the matrix multiplications using such hardware. An NNA may also comprise hardware that is configured to accelerate performing per-element operations, such as, but not limited to, addition and multiplication, on an input tensor using another tensor (which may be referred to as an element-wise operations processing unit). Such hardware may be able to efficiently (in terms of processing resources and time) perform the addition operations.
In these cases, there may be a scatter matrix for each 1x1 convolution that identifies the elements of the output tensor of that 1x1 convolution that are relevant to an element of the output tensor and if relevant, identifies which element of the output tensor it is relevant to. The scatter matrix for a 1x1 convolution is configured such that when it is applied to a channel of the output tensor of that 1x1 convolution the result is a matrix that comprises only the relevant partial outputs and an indication of which element of the output tensor each of those partial outputs is relevant to. In some cases, the scatter matrix may comprise only ones and zeros. In some cases, the scatter matrix may be configured such that when it is multiplied with a channel of the output tensor of a 1x1 convolution the result is a matrix that only comprises the partial outputs that are relevant to an element of the output tensor, and each relevant partial output is in a row (or column) of the matrix that corresponds to the corresponding element of the output tensor.
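By way of illustration, the following is a minimal sketch, in PyTorch-style Python, of how a scatter matrix of ones and zeros could be constructed and applied to a channel of a 1x1 convolution output. The function name build_scatter_matrix, the mapping format and the numerical values are hypothetical and are used only to mirror the example described in the following paragraphs.

```python
import torch

def build_scatter_matrix(num_out_active, num_partials, mapping):
    # Scatter matrix of ones and zeros for one 1x1 convolution: a '1' at
    # (out_idx, in_idx) means the in_idx-th partial output of this 1x1
    # convolution contributes to the out_idx-th active output element.
    scatter = torch.zeros(num_out_active, num_partials)
    for in_idx, out_idx in mapping:
        scatter[out_idx, in_idx] = 1.0
    return scatter

# Example from the text: the first partial output of the 1x1 convolution for
# kernel position (-1,-1) is relevant to the second active output element,
# so the scatter matrix has a '1' at position (1, 0).
scatter = build_scatter_matrix(3, 3, [(0, 1)])

# Multiplying the scatter matrix with a channel of that 1x1 convolution output
# (here a column of hypothetical partial outputs) places each relevant partial
# output in the row corresponding to the output element it contributes to.
partials = torch.tensor([[2.0], [5.0], [7.0]])
aligned = scatter @ partials   # value 2.0 lands in the second row; all other rows are zero
```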
Where, as shown in
Reference is now made to
As described above with respect to Table 3, the first partial output (q1−1) of the first 1x1 convolution (i.e. the 1x1 convolution related to kernel position (−1,−1)) is relevant to the second active element (q2) of the final output tensor. Accordingly, the scatter matrix 1702 for the first 1x1 convolution has a ‘1’ at position (1, 0) (i.e. second row, first column). None of the other partial outputs of the first 1x1 convolution are relevant to an active element of the final output tensor, so the remaining elements of the scatter matrix 1702 are set to ‘0’. Multiplying this scatter matrix 1702 with the output of the first 1x1 convolution 1620 results in a matrix 1722 with the first partial output (q1−1) in the second row. All other elements of the output matrix 1722 are zero. This indicates that the first partial output is relevant to an active position of the final output tensor, and specifically, to the second active position of the final output tensor.
Similarly, since the second partial output (q2−2) of the second 1x1 convolution (i.e. the 1x1 convolution related to kernel position (−1,0)) is relevant to the third active element (q3) of the final output tensor, the scatter matrix 1704 for the second 1x1 convolution has a ‘1’ at position (2, 1) (i.e. the third row, second column). Multiplying this scatter matrix 1704 with the output of the second 1x1 convolution 1622 results in a matrix 1724 with the second partial output (q2−2) in the third row. All other elements of the output matrix 1724 are zero. This indicates that the second partial output is relevant to an active position of the final output tensor, and specifically, to the third active position of the final output tensor.
Since the first, second and third partial outputs of the fifth 1x1 convolution (i.e. the 1x1 convolution related to kernel position (0,0)) are relevant to the first, second and third active elements (q1, q2, q3) of the final output tensor respectively, the scatter matrix 1706 for the fifth 1x1 convolution has a ‘1’ at positions (0, 0) (i.e. the first row, first column), (1, 1) (i.e. the second row, second column), and (2,2) (i.e. third row, third column). Multiplying this scatter matrix 1706 with the output of the fifth 1x1 convolution 1628 results in a matrix 1726 with the first, second, and third partial outputs in the first, second and third rows respectively. This indicates that the first, second and third partial outputs are relevant to an active position of the final output tensor, and specifically to the first, second and third active positions respectively.
Since the third partial output (q3−8) of the eighth 1x1 convolution (i.e. the 1x1 convolution related to kernel position (1,0)) is relevant to the second active element (q2) of the final output tensor, the scatter matrix 1708 for the eighth 1x1 convolution has a ‘1’ at position (1, 2) (i.e. the second row, third column). Multiplying this scatter matrix 1708 with the output of the eighth 1x1 convolution 1634 results in a matrix 1728 with the third partial output (q3−8) in the second row. All other elements of the output matrix 1728 are zero. This indicates that the third partial output is relevant to an active position of the final output tensor, and specifically, to the second active position of the final output tensor.
Finally, since the second partial output (q2−9) of the ninth 1x1 convolution (i.e. the 1x1 convolution related to kernel position (1,1)) is relevant to the first active element (q1) of the final output tensor, the scatter matrix 1710 for the ninth 1x1 convolution has a ‘1’ at position (0, 1) (i.e. the first row, second column). Multiplying this scatter matrix 1710 with the output of the ninth 1x1 convolution 1636 results in a matrix 1730 with the second partial output (q2−9) in the first row. All other elements of the output matrix 1730 are zero. This indicates that the second partial output is relevant to an active position of the final output tensor, and specifically, to the first active position of the final output tensor.
None of the other 1x1 convolutions in this example produce partial outputs that are relevant to an active position of the final output tensor, and thus the scatter matrices for each of these 1x1 convolutions (not shown) are all zeros.
It can be seen that multiplying the scatter matrices and the 1x1 convolution outputs identifies the relevant partial outputs and groups the partial outputs that are relevant to each output element together—i.e., all of the partial outputs that are relevant to the ith output element are placed in the ith row of the output of the matrix multiplication.
The scatter matrices may be generated offline (e.g. by a component external to the NNA, such as a CPU or a GPU) and provided to the NNA as an input. For example, a CPU or a GPU may be configured to identify the active locations from the input tensor in dense format (e.g. using the nonzero PyTorch® function). The CPU or GPU may then be configured to generate a hash map from the identified active locations and the sparse submanifold convolution parameters (e.g. kernel size, strides, dilation) which stores information indicating which active position of the input tensor is to be multiplied with which kernel position or offset, and which active position of the output tensor it is relevant to. An example hash map is shown below in Table 4, and is similar to what is shown in Table 3. A hash map can be efficiently generated on a GPU in a parallel manner using well-known data structures from the literature.
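By way of illustration, the following is a minimal sketch of how such a table could be built in PyTorch-style Python from an input tensor in dense format, assuming a stride-1 sparse submanifold convolution so that the active output positions coincide with the active input positions. The function name build_offset_table and the table layout are hypothetical and do not correspond exactly to Table 4.

```python
import torch

def build_offset_table(dense_input, kernel_size=3):
    # dense_input: [1, C, H, W]. An active position is any (row, col) at which
    # at least one channel is non-zero.
    mask = dense_input.abs().sum(dim=1)[0] != 0
    active = torch.nonzero(mask)                              # [N, 2] (row, col) indices
    index_of = {tuple(p.tolist()): i for i, p in enumerate(active)}

    # For a sparse submanifold convolution the active output positions coincide
    # with the active input positions. The input at (r, c), multiplied with the
    # weights at kernel offset (dr, dc), contributes to the output at (r-dr, c-dc).
    half = kernel_size // 2
    table = []                                                # (kernel offset, input idx, output idx)
    for i, (r, c) in enumerate(active.tolist()):
        for dr in range(-half, half + 1):
            for dc in range(-half, half + 1):
                out_pos = (r - dr, c - dc)
                if out_pos in index_of:
                    table.append(((dr, dc), i, index_of[out_pos]))
    return active, table
```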
In this example, once the matrix multiplications between the scatter matrices and the corresponding 1x1 convolution outputs have been completed, the partial outputs that have been identified as being relevant to each of the output elements are combined to generate the final output elements. This may be implemented through one or more tensor addition operations. In a tensor addition operation each element of a first tensor is added to the corresponding element of a second tensor. For example, as shown in
In the example shown in
Reference is now made to
Since the kernel is 3x3 there are nine kernel positions, thus nine 1x1 convolutions are performed on the input tensor in sparse format, one for each kernel position. Since there are 256 filters, each 1x1 convolution receives a [256, 128, 1, 1] weight tensor of the form [output channels, input channels, kernel height, kernel width]. In other words, the weight tensor comprises 256 filters of size 128x1x1. The output of each 1x1 convolution is thus a [1, 256, 1, 1000] tensor of partial outputs—i.e. the output tensor for each 1x1 convolution has 256 channels with 1000 partial outputs per channel.
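By way of illustration, the following is a minimal sketch, in PyTorch-style Python, of the nine 1x1 convolutions for this example; the weight values are hypothetical (random) and only the tensor shapes are intended to match the description above.

```python
import torch
import torch.nn.functional as F

# Input tensor in sparse format: 1000 active positions with 128 channels each,
# laid out as a [1, 128, 1, 1000] tensor (values are hypothetical).
x_sparse = torch.randn(1, 128, 1, 1000)

# One [256, 128, 1, 1] weight tensor per position of the 3x3 kernel, holding the
# weights of all 256 filters at that kernel position.
weights_per_position = [torch.randn(256, 128, 1, 1) for _ in range(9)]

# Each 1x1 convolution produces a [1, 256, 1, 1000] tensor of partial outputs.
partial_outputs = [F.conv2d(x_sparse, w) for w in weights_per_position]
assert partial_outputs[0].shape == (1, 256, 1, 1000)
```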
A matrix multiplication is performed on the output of each 1x1 convolution in which a corresponding scatter matrix is multiplied with each channel of the 1x1 convolution output. In this example the number of active positions (1000) in the input tensor is the same as the number of active positions (1000) in the output tensor, thus each scatter matrix is a 1000x1000 matrix. The result of each matrix multiplication operation is thus a [1, 256, 1, 1000] tensor which comprises the partial outputs of the corresponding 1x1 convolution that are relevant to an active output position.
The outputs of the matrix multiplications are then combined through a series of tensor addition or accumulation operations. Each addition or accumulation operation adds the elements at corresponding positions. In the example shown in
The efficiency of implementing operations on an NNA may be measured by the number of multiply-accumulate (MAC) operations required to implement the operation, wherein a MAC operation is an operation that computes the product of two numbers and adds the product to another number. The number of MACs to implement a sparse submanifold convolution in this manner (i.e., via 1x1 convolutions, matrix multiplications and addition operations) can be represented by equation (1), wherein K is the size of the kernel (i.e. K=KH×KW), N is the number of active positions in the input tensor, Cin is the number of channels in the input tensor, and Cout is the number of channels in the output tensor (which is equal to the number of filters).
In other cases, where an NNA has a hardware component, such as a processor, that can selectively combine elements of one or more received tensors, the combining of the relevant partial outputs generated by the 1x1 convolutions in block 1304 may be performed by that hardware component instead of via matrix multiplications with scatter matrices and addition operations. In these cases the hardware component may be configured to receive the outputs of the 1x1 convolutions and information identifying which partial outputs are relevant to each active element of the final output tensor. The hardware component (e.g. processor) may then be configured to retrieve the partial outputs relevant to each output element and combine them (e.g. via an addition operation) to generate that output element.
For example, if the 1x1 convolutions shown in
Reference is now made to
Like the inference graph of
However, instead of performing matrix multiplications and additions via hardware accelerators of the NNA, the outputs of the 1x1 convolutions are provided to a hardware component, such as a processor, of the NNA that can selectively combine elements of the 1x1 convolution outputs, along with information identifying the partial outputs relevant to each active element of the output tensor.
The maximum number of MACs to implement a sparse submanifold convolution in this manner (i.e., via 1x1 convolutions and scatter-add operations) can be represented by equation (2), K×N×Cin×Cout+K×N×Cout, wherein K is the size of the kernel (i.e. K=KH×KW), N is the number of active positions in the input tensor, Cin is the number of channels in the input tensor, and Cout is the number of channels in the output tensor (which is equal to the number of filters). Specifically, K×N×Cin×Cout MAC operations are used to perform the 1x1 convolutions, and K×N×Cout MAC operations are used to perform the scatter-add.
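By way of illustration, the following small helper reflects the MAC count stated above; the function name is hypothetical.

```python
def scatter_add_method_macs(K, N, Cin, Cout):
    # K*N*Cin*Cout MACs for the 1x1 convolutions plus K*N*Cout MACs for the scatter-add.
    return K * N * Cin * Cout + K * N * Cout

# Example: 3x3 kernel (K=9), 1000 active positions, 128 input channels, 256 filters.
print(scatter_add_method_macs(9, 1000, 128, 256))   # 297,216,000 MACs
```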
It will be evident to a person of skill in the art that references to rows and columns of tensors and matrices herein are exemplary only and that rows and columns may be switched as appropriate. It can be seen, from equations (1) and (2), that performing scatter-add operations may, in some cases, be more efficient than combining the relevant partial outputs via matrix multiplications and additions.
While it is described above that block 1306 of the method 1300 of
Table 5 shows the results of implementing a 3x3 sparse submanifold convolution on an input tensor with 1000 active positions using the first NNA method described above (1x1 convolutions, matrix multiplications, additions), the second NNA method described above (1x1 convolutions, scatter-add via processor of NNA), and as a standard convolution on a 256x256 input tensor with 1.25% density (i.e., 1000 active positions) when run on the Applicant's PowerVR 3NX NNA running at 800 MHz and 20.48 GB/s. In these tests the MatMuls of the first NNA method were implemented on a CPU, and thus the time taken to implement the MatMuls was not included in the timing numbers in Table 5; the scatter-add operations performed by the processor of the NNA were likewise not included in the timing numbers in Table 5.
The number of MACs to implement a standard 2D convolution is expressed by equation (3). This can be written in terms of N as shown in equation (4), where the denseRatio is as set out in equation (5).
It can be seen from Table 5 that a sparse submanifold convolution can be implemented much more efficiently using an NNA via either of the described methods which take advantage of the sparsity of the input and the hardware components of the NNA, compared to implementing the sparse submanifold convolution as a standard convolution on the NNA.
Table 6 shows the results of implementing a 3x3 sparse submanifold convolution on an input tensor with 1000 active positions using the second NNA method described above (1x1 convolutions, scatter-add via processor of NNA), and as a standard convolution on a 256x256 input tensor with 1.25% density (i.e., 1000 active positions) when run on the Applicant's PowerVR 4NX MC1 NNA running at 1.5 GHz and 38.4 GB/s.
Tables 5 and 6 show that a sparse submanifold convolution can be implemented significantly faster via an NNA using the described method(s), which take advantage of the sparsity of the input tensor, than implementing the sparse submanifold convolution via an NNA as a standard 2D convolution, which does not take advantage of the sparsity of the input tensor.
Combination of GPU and NNA Implementations of Sparse Submanifold Convolution
A sequence of sparse submanifold convolutions may be implemented using a combination of the GPU and NNA implementations described above. Specifically, the first sparse submanifold convolution in the sequence may be implemented in accordance with the GPU method described above with respect to
A method of implementing a sequence of two sparse submanifold convolutions may comprise: receiving, at a graphics processing unit, an input tensor, in a dense format, to a first sparse submanifold convolution of the sequence; identifying, at the graphics processing unit, active positions of the input tensor; performing, at the graphics processing unit, an indexed unfold operation on the input tensor based on the identified active positions to generate an input matrix comprising elements of the input tensor in each active window of the input tensor; performing, at the graphics processing unit, a matrix multiplication between a weight matrix and the input matrix to generate an output matrix that comprises elements of an output tensor of the first sparse submanifold convolution based on the active windows; providing the output matrix to a neural network accelerator as an input tensor, in a sparse format, to a second sparse submanifold convolution of the sequence; performing, at the neural network accelerator, for each position of a kernel of the second sparse submanifold convolution, a 1x1 convolution between the received input matrix and weights of filters of the sparse submanifold convolution at that kernel position to generate a plurality of partial outputs; and combining appropriate partial outputs of the plurality of partial outputs to generate an output tensor, in sparse format, of the second sparse submanifold convolution. Where the GPU comprises a convolution accelerator such that the GPU may be considered to be an NNA, or may be considered to comprise an NNA, the steps of the above method identified as being performed by the NNA may be performed by the convolution accelerator of the GPU.
It has been described above how indexed unfold and fold operations can be used to implement a sparse submanifold convolution on a GPU. The inventors have determined that similar indexed unfold and fold operations can be used to more efficiently implement standard convolutions, standard deconvolutions and sparse submanifold deconvolutions with sparse inputs on a GPU. Methods for implementing these convolutions and deconvolutions using similar indexed unfold and fold operations will now be described. Each of these methods are particularly adapted to take into consideration the internal functioning of a GPU. In particular, the described methods are designed to take advantage of the parallel architecture (e.g. SIMD architecture) of GPUs.
GPU Implementation of Standard Convolution Operation
As described above, with respect to
The inventors have determined that standard 2D and 3D convolutions with a sparse input can be performed more efficiently on a GPU by using similar indexed unfold and fold operations as those described above with respect to the method 500 of
Reference is now made to
At block 2104, the GPU identifies the active locations or positions in the received input tensor. As described above, an active location of an input tensor is a height and width position or location in which at least one channel of the input tensor has a non-zero value. Each active location or position may be identified by a set or pair of indices: a height or row index, and a width or column index. The active locations or positions in the received input tensor may be identified using any suitable technique, such as, but not limited to, those described above with respect to block 504 of the method 500 of
In some cases, the output of this block 2104 may be a list of the height and width (or column and row) co-ordinates or indices of the active positions. As described above,
Once the active locations or positions in the received input tensor have been identified, the method 2100 proceeds to block 2106.
At block 2106, the GPU performs an indexed unfold operation (which may also be referred to as a sparse convolution unfold operation) on the received input tensor based on the active locations or positions identified in block 2104 to generate an input matrix that comprises the elements of the non-zero windows of the input tensor. A non-zero window is a window of the input tensor used in the standard convolution that comprises at least one non-zero element. As described above with respect to
However, where a window comprises all zeros (which may be referred to as a zero window) it is not necessary to compute the output based on that window. Instead, a zero can be placed at the corresponding output position. Accordingly, only the outputs corresponding to non-zero windows may be computed.
Therefore, the inventors have developed an indexed unfold operation for a standard convolution operation in which an input matrix is generated that only comprises a column for each non-zero window of the input tensor, wherein the non-zero windows of the input tensor are identified from the active locations or positions identified in block 2104 and the convolution parameters (e.g. kernel size, strides, dilations etc.). Specifically, the indexed unfold operation for a standard convolution operation uses the active locations or positions, and the convolution parameters (e.g. strides, dilation, kernel size) to identify the non-zero windows of the input tensor, and generates an input matrix with a column for each non-zero window that comprises the elements of the input tensor in that non-zero window. For example, if the input tensor 602 of
Using an indexed unfold operation, as opposed to a standard unfold operation, to implement a standard convolution can significantly reduce the size of the input matrix. This can significantly reduce the computations to implement the matrix multiplication of block 2108 (described below). Generally the sparser the input tensor, the smaller the input matrix generated by an indexed unfold operation, and the more memory efficient the method described with respect to
The indexed unfold operation may be implemented on the GPU in any suitable manner. In one example, the GPU may be configured to, for each identified active position, determine from the parameters of the convolution (kernel dimensions, strides, dilation), which window(s) that active position forms part of, and identify each such window as a non-zero window. Each non-zero window may be identified by a particular location in the window, such as, but not limited to, the first (e.g. top-left) element of the window, the middle element of the window, or the last (e.g. the bottom right) element of the window. For example,
Once the non-zero windows have been identified, the elements forming each non-zero window are extracted from the input tensor and placed in a column of the input matrix. The elements forming each non-zero window may be extracted using any suitable technique. For example, the elements forming each non-zero window may be identified by indexing each non-zero window (e.g. from 0 to the number of non-zero windows) and, for each non-zero window, identifying the elements in that non-zero window using multiple nested loops, one for each dimension of a window. For example, where the windows are 2D with a height dimension and a width dimension, one loop for the height and another for the width may be implemented. Specifically, for a 2D convolution with 2D windows, an offset may be created for each of the height and width dimensions that is equal to (k-1)/2, where k is the size of the kernel in that dimension, and then the GPU may be configured to loop from −offset to +offset from the non-zero window centre. Where the windows are 3D with a channel dimension there may also be a channel loop that loops through the channels. The element at each identified position is then copied into the appropriate position of the input matrix.
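By way of illustration, the following is a minimal sketch, in PyTorch-style Python, of such an indexed unfold, assuming a stride of 1, an odd square kernel, zero padding outside the tensor, and non-zero windows identified by their centre elements. The function name indexed_unfold is hypothetical, and in practice each window (column) could be handled by its own GPU thread rather than a Python loop.

```python
import torch

def indexed_unfold(dense_input, window_centres, kernel_size):
    # dense_input:    [C, H, W] tensor in dense format.
    # window_centres: (row, col) centres of the non-zero windows.
    # Returns an input matrix with one column per non-zero window.
    C, H, W = dense_input.shape
    offset = (kernel_size - 1) // 2
    columns = []
    for (r, c) in window_centres:
        patch = torch.zeros(C, kernel_size, kernel_size)
        for dr in range(-offset, offset + 1):              # height loop
            for dc in range(-offset, offset + 1):          # width loop
                rr, cc = r + dr, c + dc
                if 0 <= rr < H and 0 <= cc < W:             # positions outside the tensor stay zero
                    patch[:, dr + offset, dc + offset] = dense_input[:, rr, cc]
        columns.append(patch.reshape(-1))                   # unroll the window into one column
    return torch.stack(columns, dim=1)                      # [C * kernel_size * kernel_size, num windows]
```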
As described above, each column in the input matrix corresponds to one non-zero window. The values at the offset positions, starting from top left, may be ordered from top down. In the example shown in
Reference is now made to
As described above, in some examples, performing an indexed unfold operation to implement a standard convolution may comprise determining, for each active position of the input tensor 2402 (each element shown in grey), the windows that the active position falls within. In
The windows of the input tensor that an active position falls within can be determined from the indices of the active positions and the convolution parameters (e.g. kernel size, strides and padding). In general, the size of the kernel and the strides determine the number of windows that an active position belongs to. For instance, for a 3x3 kernel with stride 2x2 there are 2x2 different cases: as described in more detail below, in one case the active position will form part of only one window, in two of the cases the active position will form part of two windows, and in the remaining case the active position will form part of four windows. In contrast, for a 3x3 kernel with stride 1x1 there is only a single case and each active position will form part of nine different windows.
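By way of illustration, the following is a minimal sketch of how the windows containing a given active position could be determined for this example (3x3 kernel, stride 2x2, padding 1); the function name windows_containing is hypothetical and the upper boundary of the input is not checked.

```python
def windows_containing(pos, kernel_size=3, stride=2, padding=1):
    # Returns the top-left corners (in padded co-ordinates) of the convolution
    # windows that contain the active position `pos`. Windows start at multiples
    # of the stride in the padded input; the upper boundary check is omitted here.
    r, c = pos[0] + padding, pos[1] + padding
    corners = []
    for wr in range(max(0, r - kernel_size + 1), r + 1):
        for wc in range(max(0, c - kernel_size + 1), c + 1):
            if wr % stride == 0 and wc % stride == 0:
                corners.append((wr, wc))
    return corners

# For a 3x3 kernel with stride 2x2 an active position falls in 1, 2 or 4 windows
# depending on the parity of its indices; with stride 1x1 it falls in up to 9 windows.
print(len(windows_containing((5, 5))))   # 4
print(len(windows_containing((4, 4))))   # 1
print(len(windows_containing((5, 4))))   # 2
```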
In the example shown in
Once the non-zero windows have been identified the elements forming each non-zero window are extracted from the input tensor 2402 and placed in a column of the input matrix 2404. As described above, in the example shown in
In some cases, in addition to receiving the input tensor in dense format, and the active positions, the indexed unfold operation may also receive a zeroed input matrix with the desired shape (e.g. height=elements per kernel×number of channels, width=number of non-zero windows) and the GPU may be configured to write the non-zero window elements to the appropriate location in the input matrix.
Once the GPU has performed an indexed unfold operation on the input tensor to generate an input matrix with the elements of each non-zero window of the input tensor, the method 2100 proceeds to block 2108.
At block 2108, the GPU performs a matrix multiplication operation (which may be referred to as a MatMul operation) between a weight matrix and the input matrix generated in block 2106 to generate an output matrix.
The matrix multiplication between the weight matrix 2502 and the input matrix 2404 generates an output matrix 2504 which comprises, for each non-zero window, an output element for each channel of the output (i.e., for each filter). The output matrix 2504 may have a column for each non-zero window and a row for each output channel such that each column comprises an output element for the corresponding non-zero window for each output channel. In the example shown in
Once the GPU has performed the matrix multiplication the method 2100 may end or the method 2100 may proceed to block 2110. Specifically, the output matrix 2504 generated in block 2108 comprises all of the non-zero elements of the output tensor (in other words the output matrix corresponds to the output tensor in sparse format) and thus the output matrix 2504 may be simply output, or an output tensor in dense format may be first generated from the output matrix 2504.
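By way of illustration, the following is a minimal sketch, in PyTorch-style Python, of the matrix multiplication of block 2108 with hypothetical shapes (64 filters, 16 input channels, a 3x3 kernel and 200 non-zero windows); the unrolling order of the weight matrix is assumed to match that used by the indexed unfold.

```python
import torch

# Hypothetical shapes: 64 filters, 16 input channels, a 3x3 kernel and 200 non-zero windows.
filters = torch.randn(64, 16, 3, 3)                 # [Cout, Cin, KH, KW]
input_matrix = torch.randn(16 * 3 * 3, 200)         # produced by the indexed unfold

# Weight matrix: one row per filter (output channel), one column per window element.
# The unrolling order must match the order used by the indexed unfold.
weight_matrix = filters.reshape(64, -1)             # [Cout, Cin * KH * KW]

# Output matrix: one row per output channel, one column per non-zero window.
output_matrix = weight_matrix @ input_matrix        # [64, 200]
```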
At block 2110, the GPU performs an indexed fold operation on the output matrix generated in block 2108 to generate an output tensor in dense format. The indexed fold operation is the opposite of the indexed unfold operation performed in block 2106. Specifically, each non-zero window can be mapped to a 2D position in the output tensor in dense format, and the indexed fold operation uses this information to generate an output tensor in dense format (e.g. a densified output tensor) from the output matrix generated in block 2108. In other words, the indexed fold operation generates an output tensor with each element in the output matrix in the correct position and zeros elsewhere.
As described above, a standard fold operation (e.g. a col2im operation) receives an output matrix in dense format—i.e. an output matrix that comprises a row per output channel with an output element for each element of that channel—and converts each row of the received matrix to a plane of the output tensor in accordance with the size of a channel of the output tensor.
In contrast to a standard fold operation, an indexed fold operation receives an output matrix in sparse format—i.e., the received matrix does not necessarily comprise a value or element for each element of the output tensor. An indexed fold operation also receives information indicating the output position corresponding to each non-zero window. The information indicating the output position corresponding to each non-zero window may be the co-ordinates of the active locations in the input tensor determined in block 2104; the location of the non-zero windows identified in block 2106 to perform the indexed unfold operation; or the co-ordinate of the position in the output tensor corresponding to each non-zero window (which may be determined in block 2106 as part of the indexed unfold operation). The position in the output tensor corresponding to each non-zero window is then used to place the elements of the sparse output matrix in the correct location of the output tensor in dense format. The elements at all other positions of the output tensor may then be set to zero.
If the co-ordinates of the position of the output tensor corresponding to each non-zero window are not explicitly provided then they may be determined from the provided information. For example, as described above, the non-zero windows can be identified from the active positions of the input tensor and the convolution parameters (e.g. strides, kernel size etc.). Each non-zero window can then be mapped to a 2D position in the output tensor based on the convolution parameters. For example, as shown in
The indexed fold operation may be implemented on the GPU by, for example, creating an index for each non-zero window (e.g. from 0 to n where there are n+1 non-zero windows). For each non-zero window, the GPU may then be configured to determine the associated location in the output tensor from a location in that window (e.g. the centre of the window) and the convolution parameters, then loop through each output channel and place or copy the appropriate element of the output matrix in the determined location in the corresponding channel of the output tensor. Each non-zero window may get its own thread so that the indexed fold operation can be performed in parallel on the GPU.
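By way of illustration, the following is a minimal sketch, in PyTorch-style Python, of such an indexed fold, assuming each non-zero window is identified by its top-left corner in padded input co-ordinates and that this corner divided by the stride gives the corresponding output position. The function name indexed_fold is hypothetical, the per-channel loop described above is vectorised, and in practice each window could be handled by its own GPU thread.

```python
import torch

def indexed_fold(output_matrix, window_corners, out_shape, stride=1):
    # output_matrix:  [Cout, num non-zero windows], from the matrix multiplication.
    # window_corners: (row, col) top-left corners of the non-zero windows, in the
    #                 padded input co-ordinates used to identify the windows.
    # out_shape:      (Cout, Hout, Wout).
    dense_out = torch.zeros(out_shape)               # zeros everywhere except the computed outputs
    for w_idx, (r, c) in enumerate(window_corners):
        out_r, out_c = r // stride, c // stride      # map each window to its output position
        dense_out[:, out_r, out_c] = output_matrix[:, w_idx]   # copy all output channels at once
    return dense_out
```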
In some cases, in addition to receiving information indicating the positions of the final output tensor corresponding to each non-zero window, the convolution parameters and the output matrix generated in block 2108, the indexed fold operation may also receive a zeroed output tensor of the appropriate dimensions and may write the elements of the received matrix to the received output tensor. In such cases, zeros may not be explicitly added to the positions of the output tensor that do not comprise an element of the output matrix.
Once the indexed fold operation has been performed, the method 2100 may end.
Test Results
Tables 8, 9 and 10 show the average time or duration, in ms, to implement a 2D convolution on a GPU for 2D input tensors of sizes 128x128, 1000x1000 and 10000x10000 respectively using different methods and different levels of sparsity. The methods include: (1) the method described above with respect to
The UnFoldNd, PyTorch Conv2D, and PyTorch Un/Fold methods are known 2D convolution methods which do not take into account sparsity. Accordingly, the time to implement a 2D convolution operation using these methods does not change with the sparsity, because each of these methods performs the same operations regardless of the sparsity (i.e., performs the full 2D convolution). It can be seen from Tables 8 to 10 that when the input is large enough and the sparsity is high enough (e.g. 80% or above), the method described herein with respect to
Reference is now made to
It can be seen from
A deconvolution, which may also be referred to as a convolution transpose or a fractionally strided convolution, is the reverse operation of a convolution. Specifically, a convolution can typically be represented as a matrix multiplication between an input vector AV and a sparse matrix C as shown in equation (6) where the non-zero elements of the sparse matrix C are the weights w of the filter W. The input vector AV is the elements of the input tensor unrolled from left to right and top to bottom (and front to back if 3D). For example, as shown in
In contrast, in a deconvolution the input tensor A is processed by transposing the sparse matrix C for the corresponding direct convolution to generate a transposed sparse matrix CT and performing a matrix multiplication between the input vector AV and the transposed sparse matrix CT as shown in equation (7).
As is known to those of skill in the art, a matrix is transposed by converting the rows of the matrix into columns and converting the columns into rows. For example,
Where a convolution may produce an output tensor B that is smaller, in the height and/or width dimension, relative to the input tensor A, a deconvolution may produce an output tensor B that is larger, in the height and/or width dimension, relative to the input tensor A. For example, as shown in
As described in the Applicant's GB Patent No. 2582352, which is incorporated herein by reference in its entirety, a deconvolution can be implemented by performing a plurality of direct or standard convolutions on the input tensor to the deconvolution and interleaving the outputs of the direct convolutions to generate the output of the deconvolution. Specifically, each filter of the deconvolution is divided into a plurality of sub-filters; a convolution operation is performed between each sub-filter and the input tensor to generate a sub-output tensor; and the elements of the sub-output tensors are interleaved to generate a channel of the final output.
For example,
It can be seen in
In general, the number of sub-filters (per filter) to implement a particular deconvolution is based on the stride(s) of the deconvolution. In particular, there will be stride_h*stride_w*stride_p sub-filters per filter where stride_h is the stride in the height dimension, stride_w is the stride in the width dimension, and stride_p is the stride in the p or channel dimension. For example, where the filter W is one-dimensional (1D) in the width dimension there will be stride_w sub-filters. In particular, for a deconvolution with a 1D filter and stride_w=4 there will be 4 sub-filters. Where the filter W is two-dimensional (2D) and the filter moves in both width and height dimensions with respect to the input tensor A, there will be stride_w*stride_h sub-filters. Where the filter W is three-dimensional (3D) the number of sub-filters may depend on the number of directions or dimensions in which the filter moves with respect to the input tensor. For example, in a 2D deconvolution a 3D filter is only moved in the width and height dimensions (or the x and y dimensions) with respect to a 3D input tensor, so there will only be stride_w*stride_h sub-filters per filter. In contrast, in a 3D deconvolution a 3D filter moves in the height, width and p dimensions with respect to the 3D input tensor, thus there will be stride_h*stride_w*stride_p sub-filters per filter.
In general, the dimension of the kernel of each sub-filter will be w_sub_filter_max*h_sub_filter_max wherein
In some cases, the sub-filters of a filter W may be generated by forming a stride_w*stride_h*stride_p base block of filter weights from the origin of the filter W. The origin of a filter is the filter weight that is aligned with a particular input element to generate an output element for that input element. The origin of a filter is typically the first filter weight, the last filter weight or the centre filter weight, but it can be any filter weight. Once the base block is formed, each sub-filter is formed by taking the filter weights at the stride increments starting from one of the filter weights in the base block and generating a reflected version of that filter.
For example, as shown in
A direct convolution with strides of 1 is then performed between the input tensor and each sub-filter. Each sub-filter thus generates a sub-output tensor. The elements of the sub-output tensors corresponding to the same filter are then interleaved to generate a channel of the output tensor. In general the output elements of the sub-output tensors are interleaved in sub-filter order in accordance with the stride in each direction. Specifically, if the deconvolution has a stride in the width dimension that is greater than 1 (i.e. stride_w>1) each row of a channel of the output tensor is generated by selecting elements from stride_w sub-output tensors in a round-robin manner. If the deconvolution has a stride in the height dimension that is greater than 1 (i.e. stride_h>1) every stride_hth row is generated by selecting elements from the same stride_w sub-output tensors. For example for a deconvolution that generates an output tensor with 4x4 channels with stride_w=2 and stride_h=2 there will be four sub-filters per filter numbered 1 to 4. The first row and the third row of a channel of the output tensor 3502 are generated by alternating between elements of the 1st and 2nd sub-output tensors and the second and fourth rows are generated by alternating between elements of the 3rd and 4th sub-output tensors as shown in
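By way of illustration, the following is a minimal sketch, in PyTorch-style Python, of the interleaving step only (not the sub-filter generation) for a single filter of a stride 2x2 deconvolution; the function name interleave_sub_outputs is hypothetical and the output size shown ignores any cropping or padding that a particular deconvolution may require.

```python
import torch

def interleave_sub_outputs(sub_outputs, stride_h=2, stride_w=2):
    # sub_outputs: stride_h * stride_w tensors of shape [Hs, Ws], in sub-filter
    # order (1, 2, 3, 4 for a stride 2x2 deconvolution), for a single filter.
    Hs, Ws = sub_outputs[0].shape
    out = torch.zeros(Hs * stride_h, Ws * stride_w)
    for idx, sub in enumerate(sub_outputs):
        dh, dw = divmod(idx, stride_w)               # position of this sub-filter within a block
        out[dh::stride_h, dw::stride_w] = sub
    return out

# For stride 2x2, sub-outputs 1 and 2 alternate along the first, third, ... rows and
# sub-outputs 3 and 4 alternate along the second, fourth, ... rows of the channel.
subs = [torch.full((2, 2), float(i + 1)) for i in range(4)]
print(interleave_sub_outputs(subs))
```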
The inventors have determined that standard 2D and 3D deconvolutions with a sparse input can be performed efficiently on a GPU by using similar indexed unfold and fold operations as those described above with respect to the method 500 of
Reference is now made to
At block 3604, the GPU identifies the active locations or positions in the received input tensor. As described above, an active location in an input tensor is a height and width position or location in which at least one channel of the input tensor has a non-zero value or element. Each active location or position may be identified by a set or pair of indices: a height or row index, and a width or column index. The active locations or positions in the received input tensor may be identified using any suitable technique, such as, but not limited to, those described above with respect to block 504 of the method 500 of
In some cases, the output of this block 3604 may be a list of the height and width (or column and row) co-ordinates or indices of the active positions. As described above,
Once the active locations or positions in the received input tensor have been identified, the method 3600 proceeds to block 3606.
At block 3606, the GPU performs an indexed unfold operation (which may also be referred to as a sparse deconvolution unfold operation) on the received input tensor based on the active locations or positions identified in block 3604 to generate an input matrix that comprises the elements of the non-zero sub-windows of the input tensor. As described above, a deconvolution can be implemented by performing a direct convolution between the input tensor and each of a plurality of sub-filters and interleaving the results of the direct convolutions. A non-zero sub-window is a window of the input tensor used in the direct convolutions that has at least one non-zero element. The size of the sub-windows for a deconvolution is based on the size of the sub-filters. As described above, the size of the sub-filters is based on the size of the deconvolution filters and the strides. Specifically, as described above, the sub-filters for a 2D deconvolution will be of size wsub_filter_max×hsub_filter_max×Cin wherein
For example, as described above with respect to
Performing an indexed unfold operation may comprise, identifying, from the active locations or positions of the input tensor identified in block 3604 and the deconvolution parameters (e.g. strides, dilation, kernel size), the non-zero sub-windows of the input tensor; and, for each non-zero sub-window, extracting the elements of that non-zero sub-window from the input tensor and placing them in the input matrix. In some cases, the input matrix comprises a column for each non-zero sub-window and a row for each position in a sub-window, and all of the elements that form a non-zero sub-window are placed in the same column. In some cases, the elements in a sub-window are unrolled from left to right and top to bottom, such that the element in the top left corner of the sub-window is placed in the first row, and the element in the bottom right of the sub-window is placed in the last row. However, it will be evident to a person of skill in the art that this is an example only.
The indexed unfold operation may be implemented on the GPU in any suitable manner. In one example, the GPU may be configured to, for each identified active position of the input tensor, determine from the parameters of the deconvolution (kernel dimensions, strides, dilation), which sub-window(s) of the input tensor that active position forms part of, and identify each such sub-window as a non-zero sub-window. Each non-zero sub-window may be identified by a particular location in the window, such as, but not limited to, the first (e.g. top-left) element of the sub-window, the middle element of the sub-window, or the last (e.g. the bottom right) element of the sub-window.
For example,
Once the non-zero sub-windows have been identified, the elements forming each unique non-zero sub-window may be extracted from the input tensor and placed in the input matrix. It is noted that more than one active position may belong to the same non-zero sub-window, but the elements of that sub-window only need to be placed in the input matrix once. This may be implemented by, for example, indexing each unique non-zero sub-window (e.g. from 0 to the number of unique non-zero sub-windows) and, for each non-zero sub-window, identifying the elements in that non-zero sub-window using one or more nested loops. There may be one loop for each dimension of the sub-window. For example, where the sub-window is a 2D window with height and width dimensions there may be a height loop and a width loop. Specifically, when the deconvolution is a 2D deconvolution and the sub-windows are 2D, an offset may be created for each of the height and width dimensions that is equal to k-1, where k is the size of the sub-kernel in that dimension, and then the height loop may be configured to loop from Y-offset_y to Y, and the width loop may be configured to loop from X-offset_x to X, where (Y,X) is the bottom right corner of the sub-window. If the sub-windows are 3D and have more than one channel, a channel loop may be configured to loop through the channels. The element at each identified position is then copied into the appropriate position of the input matrix.
For example, as described above, the bottom right corner of each non-zero sub-window is identified in
In some cases, in addition to receiving the input tensor in dense format, the active positions, and one or more deconvolution parameters, the indexed unfold operation may also receive a zeroed input matrix with the desired shape (e.g. height=elements per sub-kernel×number of channels of input tensor, width=number of non-zero sub-windows) and the GPU may be configured to write the non-zero sub-window elements to the appropriate location in the input matrix.
Once the GPU has performed an indexed unfold operation on the input tensor to generate an input matrix with the elements of each unique non-zero sub-window of the input tensor, the method 3600 proceeds to block 3608.
At block 3608, the GPU performs a matrix multiplication operation (which may be referred to as a MatMul operation) between a weight matrix and the input matrix generated in block 3606 to generate an output matrix.
The matrix multiplication between the weight matrix 3802 and the input matrix 3704 generates an output matrix 3804 which comprises, for each non-zero sub-window, an output element for each sub-filter. The output matrix 3804 may have a column for each non-zero sub-window and a row for each sub-filter such that each column comprises an output element for the corresponding non-zero sub-window for each sub-filter. In the example shown in
Once the GPU has performed the matrix multiplication the method 3600 may end or the method 3600 may proceed to block 3610. Specifically, the output matrix 3804 generated in block 3608 comprises all of the non-zero elements of the output tensor and thus the output matrix 3804 may be simply output, or an output tensor in dense format may be first generated from the output matrix 3804.
At block 3610, the GPU performs an indexed fold operation on the output matrix generated in block 3608 to generate an output tensor in dense format. The indexed fold operation is the opposite of the indexed unfold operation. Specifically, each non-zero sub-window can be mapped to a plurality of positions of the output tensor (one per sub-filter of a filter) and the indexed fold operation uses this information to generate an output tensor in dense format (e.g. a densified output tensor) from the output matrix generated in block 3608. In other words, the indexed fold operation uses this information to generate an output tensor with each element in the output matrix in the correct position and zeros elsewhere.
As described above, a standard fold operation (e.g. a col2im operation) receives an output matrix in dense format—i.e. an output matrix that comprises a row per output channel with an output element for each element of that channel—and converts each row of the received matrix to a plane of the output tensor in accordance with the size of a channel of the output tensor.
In contrast to a standard fold operation, an indexed fold operation receives an output matrix in sparse format—i.e., the received matrix does not comprise a value or element for each element of the output tensor. An indexed fold operation also receives information indicating the plurality of positions of the output tensor associated with each non-zero sub-window. This information may comprise information identifying the active positions in the input tensor (from which the positions of the output tensor associated with each non-zero sub-window can be determined), information identifying the non-zero sub-windows (from which the positions of the output tensor associated with each non-zero sub-window may be determined), or information explicitly identifying the co-ordinates of the positions of the output tensor associated with each non-zero window (which may, for example, be determined as part of block 3606). The received information is then used to place the elements of the sparse output matrix in the correct location of the output tensor in dense format. The elements at all other positions may then be set to zero.
For example, as described above, the non-zero sub-windows can be identified from the active positions and the deconvolution parameters (e.g. strides, kernel size etc.). Each non-zero sub-window can then be mapped to multiple positions in the output tensor, one for each sub-filter. For example, as shown in
Table 12 illustrates a mapping between each of the non-zero sub-windows and the corresponding 2x2 block in the output tensor, and
The indexed fold operation may be implemented on the GPU by, for example, creating an index for each non-zero sub-window (e.g. from 0 to n where there are n+1 non-zero sub-windows). For each non-zero sub-window, the GPU may then be configured to determine the associated locations in the output tensor from a location in that sub-window (e.g. the bottom right of the sub-window) and the deconvolution parameters, then loop through each sub-filter and place or copy the element of the output matrix corresponding to that sub-filter in the determined location of the output tensor. Each non-zero sub-window may get its own thread so that the indexed fold operation can be performed in parallel on the GPU.
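By way of illustration, the following is a minimal sketch, in PyTorch-style Python, of such an indexed fold for a stride-2 deconvolution, simplified to a single output channel. The function name indexed_fold_deconv is hypothetical, and it assumes the mapping described above in which the bottom-right corner of a non-zero sub-window multiplied by the stride gives the bottom-right corner of the corresponding block of the output tensor.

```python
import torch

def indexed_fold_deconv(output_matrix, sub_window_corners, out_shape, stride=2):
    # output_matrix:      [num sub-filters, num non-zero sub-windows] for one output channel.
    # sub_window_corners: (row, col) bottom-right corners of the non-zero sub-windows.
    # out_shape:          (Hout, Wout).
    Hout, Wout = out_shape
    dense_out = torch.zeros(out_shape)
    for w_idx, (r, c) in enumerate(sub_window_corners):
        # Each sub-window maps to a stride x stride block of the output whose
        # bottom-right corner is at (r * stride, c * stride).
        top, left = r * stride - (stride - 1), c * stride - (stride - 1)
        for s_idx in range(output_matrix.shape[0]):   # one output element per sub-filter
            dh, dw = divmod(s_idx, stride)            # sub-filter 1 -> upper left, 4 -> lower right
            rr, cc = top + dh, left + dw
            if 0 <= rr < Hout and 0 <= cc < Wout:
                dense_out[rr, cc] = output_matrix[s_idx, w_idx]
    return dense_out
```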
In some cases, in addition to receiving information indicating the positions of the output tensor associated with each non-zero sub-window, the deconvolution parameters and the output matrix generated in block 3608, the indexed fold operation may also receive a zeroed output tensor of the appropriate dimensions and may write the elements of the received matrix to the received output tensor. In these cases, a zero does not have to be explicitly placed in the positions of the output tensor not associated with a non-zero sub-window.
Once the indexed fold operation has been performed, the method 3600 may end.
GPU Implementations of Sparse Submanifold Deconvolutions
A sparse submanifold deconvolution is similar to a sparse submanifold convolution in that not all of the elements of the output tensor are generated. However, where the outputs that are generated for a sparse submanifold convolution are driven by the active positions in the input tensor, the outputs that are generated for a sparse submanifold deconvolution are driven by the desired active positions in the output tensor, which may be referred to as the scatter positions or the target positions.
The inventors have determined that sparse submanifold 2D and 3D deconvolutions can be performed efficiently on a GPU by using similar indexed unfold and fold operations as those described above with respect to the method 500 of
Reference is now made to
At block 4004, the GPU receives information identifying the active positions of the output tensor, which may be referred to as the target positions of the output tensor or the scatter positions. Each active location or position may be identified by a set or pair of indices: a height or row index, and a width or column index.
For example, the GPU may receive information indicating that the nine positions of the output tensor 4102 of a 3x3 deconvolution with strides of 2 shown in grey in
Once the active locations or positions in the output tensor have been received, the method 4000 proceeds to block 4006.
At block 4006, the GPU performs an indexed unfold operation (which may also be referred to as a sparse submanifold deconvolution unfold operation) on the received input tensor based on the active locations or positions of the output tensor identified in block 4004 to generate an input matrix that comprises the elements of the sub-windows of the input tensor that generate the identified output positions. As described above, a deconvolution can be implemented by performing a direct convolution between the input tensor and each of a plurality of sub-filters and interleaving the results of the direct convolutions. The sub-windows for a deconvolution are based on the size of the sub-filters. As described above, the size of the sub-filters is based on the size of the filters and the strides of the deconvolution. For example, as described above with respect to
Performing an indexed unfold operation may comprise, identifying, from the active locations or positions of the output tensor identified in block 4004 and the deconvolution parameters (e.g. strides, dilation, kernel size), the sub-window of the input tensor and the sub-filter used to generate each active position of the output tensor; and, for each identified sub-window, extracting the elements of that sub-window from the input tensor and placing them in the input matrix. In some cases, the input matrix may comprise a column for each identified sub-window and a row for each position in a sub-window and all of the elements that form a sub-window are placed in the same column. In some cases, the elements in a sub-window are unrolled from left to right and top to bottom, such that the element in the top left corner of the sub-window is placed in the first row, and the element in the bottom right of the sub-window is placed in the last row. However, it will be evident to a person of skill in the art that this is an example only.
The indexed unfold operation may be implemented on the GPU in any suitable manner. In one example, the GPU may be configured to, for each identified active position in the output, determine from the parameters of the deconvolution (kernel dimensions, strides, dilation), which sub-window of the input tensor and which sub-filter of the plurality of sub-filters are used to generate the element(s) at that output position. Each sub-window may be identified by a particular location in the sub-window, such as, but not limited to, the first (e.g. top-left) element of the sub-window, middle element of the sub-window, or the last (e.g. the bottom right) element of the sub-window.
For example,
If each 2x2 block of the output tensor is defined by its bottom right position, then the 2x2 block that an active position in the output tensor belongs to will have a bottom right position defined by the nearest multiple of two greater than or equal to the indices. For example, the active position (2,2) has two even indices so the relevant 2x2 block has a bottom right position of (2,2); and the active position (4,11) has one even index and one odd index so the relevant 2x2 block has a bottom right position of (4, 12). The 2x2 block of the output to which each active position of the output belongs is shown in
The relevant 2x2 sub-window of the input tensor can then be identified from the relevant 2x2 block of the output tensor by dividing the indices of the relevant 2x2 block of the output tensor by the stride (e.g. 2 in this example). For example, the relevant 2x2 sub-window of the input tensor for an element that belongs to a 2x2 block of the output defined by (0,20) is the 2x2 sub-window of the input tensor defined by (0, 10). The 2x2 sub-window of the input tensor relevant to each active position in the output tensor is shown in
The sub-filter that is relevant to an active position of the output tensor is based on where that active position is located within its relevant 2x2 block of the output tensor. If an active position is in the upper left corner of a block of the output tensor then sub-filter 1 is relevant, if an active position is the upper right corner of a block of the output tensor then sub-filter 2 is relevant, if an active position is in the lower left corner of a block of the output tensor then sub-filter 3 is relevant, and if an active position is in the lower right corner of a block of the output tensor then sub-filter 4 is relevant. The sub-filter relevant to each active position in the output tensor is shown in
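By way of illustration, the following is a minimal sketch of the mapping just described for a 3x3 sparse submanifold deconvolution with strides of 2; the function name map_target_position is hypothetical.

```python
import math

def map_target_position(out_pos, stride=2):
    # Returns, for a target output position of a stride-2 sparse submanifold
    # deconvolution: the bottom-right corner of its 2x2 output block, the bottom-right
    # corner of the relevant 2x2 sub-window of the input tensor, and the relevant
    # sub-filter (1 = upper left, 2 = upper right, 3 = lower left, 4 = lower right).
    r, c = out_pos
    block = (math.ceil(r / stride) * stride, math.ceil(c / stride) * stride)
    sub_window = (block[0] // stride, block[1] // stride)
    sub_filter = 1 + (1 if c == block[1] else 0) + (2 if r == block[0] else 0)
    return block, sub_window, sub_filter

# Examples from the text: active position (2,2) belongs to the block defined by (2,2),
# and active position (4,11) belongs to the block defined by (4,12).
print(map_target_position((2, 2)))     # ((2, 2), (1, 1), 4)
print(map_target_position((4, 11)))    # ((4, 12), (2, 6), 3)
```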
Once the relevant sub-windows and sub-filters have been identified, the elements forming each identified sub-window are extracted from the input tensor and placed in the input matrix. This may be implemented by, for example, indexing each unique relevant sub-window (e.g. from 0 to the number of relevant sub-windows) and, for each unique relevant sub-window, identifying the elements in that sub-window using a plurality of nested loops, one for each dimension of the sub-windows. For example, where the sub-window is two-dimensional with a height dimension and a width dimension, there may be one loop for the height dimension and another for the width dimension. Specifically, for a 2D sparse submanifold deconvolution with 2D sub-filters with a height dimension and a width dimension, an offset may be created for each of the height and width dimensions that is equal to k-1, where k is the size of the sub-filter in that dimension, and then the height loop may be configured to loop from Y-offset_y to Y, and the width loop may be configured to loop from X-offset_x to X, where (Y,X) is the bottom right corner of the sub-window. Where the sub-windows are 3D and comprise more than one channel, there may also be a channel loop that loops through the channels. The element of the input tensor at each identified position is then copied into the appropriate position of the input matrix.
For example, for each relevant sub-window of the input tensor 4104 identified in
In some cases, in addition to receiving the input tensor in dense format, and the active positions of the output tensor, the indexed unfold operation may also receive a zeroed input matrix with the desired shape (e.g. height=elements per sub-kernel×number of input channels, width=number of unique relevant sub-windows) and the GPU may be configured to write the sub-window elements to the appropriate location in the input matrix.
Once the GPU has performed an indexed unfold operation on the input tensor to generate an input matrix with the elements of each relevant sub-window of the input tensor, the method 4000 proceeds to block 4008.
At block 4008, the GPU performs a matrix multiplication operation (which may be referred to as a MatMul operation) between a weight matrix and the input matrix generated in block 4006 to generate an output matrix.
The matrix multiplication between the weight matrix 4202 and the input matrix 4106 generates an output matrix 4204 which comprises, for each relevant sub-window, an output element for each relevant sub-filter. The output matrix 4204 may have a column for each relevant sub-window and a row for each sub-filter such that each column comprises an output element for the corresponding relevant sub-window for each relevant sub-filter. In the example shown in
Once the GPU has performed the matrix multiplication the method 4000 may end or the method 4000 may proceed to block 4010. Specifically, the output matrix 4204 generated in block 4008 comprises all of the elements at the target positions of the output tensor and thus the output matrix 4204 may be simply output, or an output tensor in dense format may be first generated from the output matrix 4204.
At block 4010, the GPU performs an indexed fold operation on the output matrix generated in block 4008 to generate an output tensor in dense format. The indexed fold operation is the opposite of the indexed unfold operation. Specifically, the indexed fold operation uses the target output positions, and the relevant sub-filter information to generate an output tensor in dense format (e.g. a densified output tensor) from the output matrix generated in block 4008. In other words, the indexed fold operation generates an output tensor with the desired elements from the output matrix in the correct position and zeros elsewhere.
As described above, a standard fold operation (e.g. a col2im operation) receives an output matrix in dense format—i.e. an output matrix that comprises a row per output channel with an output element for each element of that channel—and converts each row of the received matrix to a plane of the output tensor in accordance with the size of a channel of the output tensor.
In contrast to a standard fold operation, an indexed fold operation receives an output matrix in sparse format—i.e., the received matrix does not necessarily comprise a value or element for each element of the output tensor. An indexed fold operation to implement a sparse submanifold deconvolution also receives information (e.g. indices) indicating the target positions in the output tensor and information indicating the relevant sub-filter for each desired position in the output tensor (this may be the relevant sub-filter, if computed in block 4006, or information (such as the deconvolution parameters (e.g. kernel size, strides etc.)) which can be used to determine the relevant sub-filter for an active position in the output tensor). The target location/position information and the information indicating the sub-filter relevant to each target position are used to identify the desired elements of the sparse output matrix and place them in the correct location of the output tensor in dense format. The elements at all other positions may then be set to zero.
The indexed fold operation may be implemented on the GPU by, for example, creating an index for each active position in the output tensor (e.g. from 0 to n where there are n+1 active positions in the output tensor). For each active position in the output tensor, the GPU may then be configured to select the appropriate element from the output matrix and place it at that active position in a channel of the output tensor. For example,
In some cases, in addition to receiving the target output positions, and information indicating the relevant sub-filters, the deconvolution parameters and the output matrix generated in block 4008, the indexed fold operation may also receive a zeroed output tensor of the appropriate dimensions and may write the elements of the received output matrix to the received output tensor. In such cases, a zero may not be explicitly placed at the non-target positions of the output tensor.
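A minimal Python/NumPy sketch of the indexed fold of block 4010 is given below, assuming an H×W×C output layout and example target positions; each column of the sparse output matrix is scattered to its target height and width position across the output channels, with all other positions remaining zero. The variable names and sizes are assumptions for illustration only, not the hardware implementation.

    import numpy as np

    # Sketch only: scatter each column of the sparse output matrix into a zeroed
    # dense output tensor at its target (height, width) position.
    num_output_channels = 8                                        # assumption
    out_height, out_width = 10, 10                                 # assumption
    target_positions = [(0, 1), (2, 3), (4, 4), (7, 2), (9, 9)]    # example (h, w) targets

    output_matrix = np.random.rand(num_output_channels, len(target_positions)).astype(np.float32)

    dense_output = np.zeros((out_height, out_width, num_output_channels), dtype=np.float32)
    for i, (h, w) in enumerate(target_positions):
        dense_output[h, w, :] = output_matrix[:, i]    # one column per target position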
Once the indexed fold operation has been performed, the method 4000 may end.
Neural Network
Any of the methods described above for implementing a convolution or a deconvolution may be implemented as part of processing data in accordance with a neural network to, for example, perform a signal processing task such as, but not limited to, an image processing task or a computer vision task. For example, the method of
The methods described above have proven particularly efficient, in terms of time and computing resources, at performing convolutions and deconvolutions on highly sparse input data (e.g. above 80% sparsity).
Example NNA
Reference is now made to
The input unit 4402 is hardware configured to receive and store the input data to the neural network accelerator 4400. The input data may be received from external memory (i.e., memory external to the NNA 4400). In some examples, the input unit 4402 may comprise one or more buffers to store the received input data. Although the example NNA 4400 of
Each processing unit 4404, 4406, 4408, 4410, is itself an accelerator configured to accelerate performing one or more neural network operations on input data. Specifically, each processing unit 4404, 4406, 4408, 4410 is configured to receive an input tensor and perform, via hardware logic, one or more operations on the input tensor to generate an output tensor. The NNA 4400 of
The element-wise operations processing unit 4406 is hardware configured to receive input data (e.g. an input tensor) and perform an element-wise operation on the input data (e.g. input tensor), optionally with another data set (e.g. another tensor) which may be obtained or retrieved from external memory (e.g. memory external to the NNA). An element-wise operation is an operation that is performed in the same way on each element of the input data/tensor (e.g. each input data value or each tensel). Element-wise operations which may be performed on the input data include, but are not limited to, add, multiply, maximum, and minimum.
The other data set/tensor may be the same size (e.g. have the same dimensions) as the input data/tensor such that corresponding elements of the two tensors are combined using an element-wise operation. Alternatively, the other data set/tensor and the input data/tensor may have a different size or dimensions. If, for example, the mismatching dimension of one of the tensors is of size 1, an element-wise operation may be performed between the input data/tensor and the other data set/tensor using a broadcast technique wherein the smaller tensor is broadcast (or expanded) to the size of the other tensor. For example, a tensor of size [N, H, W, C]=[1, 10, 1, 10] can be combined element-wise with a tensor of size [N, H, W, C]=[1, 10, 10, 10] by expanding the W dimension of the first tensor.
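For instance, the broadcast combination described above may be illustrated with the following Python/NumPy snippet, using the example sizes given; the snippet is a behavioural illustration only and does not represent the element-wise operations processing unit itself:

    import numpy as np

    # The W dimension of the first tensor (size 1) is broadcast to match the second tensor.
    a = np.ones((1, 10, 1, 10), dtype=np.float32)    # [N, H, W, C] = [1, 10, 1, 10]
    b = np.ones((1, 10, 10, 10), dtype=np.float32)   # [N, H, W, C] = [1, 10, 10, 10]
    c = a + b                                        # element-wise add with broadcasting
    assert c.shape == (1, 10, 10, 10)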
It will be evident to a person of skill in the art that this is just an example set of processing units and that other NNAs may have additional processing units, fewer processing units and/or different processing units depending, for example, on the type of neural networks they are intended to process. In some cases, one or more of the processing units may be combined.
The output unit 4412 is hardware configured to receive the output tensor generated by processing the input data via one or more processing units 4404, 4406, 4408, 4410. In some cases, the output unit 4412 may have a buffer or other storage for temporarily storing all or a portion of the output tensor prior to outputting the output tensor from the NNA 4400. In some cases, the output unit 4412 may be configured to save the output tensor in external memory (i.e., memory that is external to the neural network accelerator).
The interconnection hardware 4414 statically or dynamically connects the input unit, one or more processing units, and the output unit to allow input data to the neural network accelerator to flow through (e.g. be processed by) one or more processing units and then be output from the neural network accelerator. In some cases, the interconnection hardware 4414 may comprise fixed hardware connections between the input unit 4402, the processing units 4404, 4406, 4408, 4410 and the output unit 4412 that allow data to flow through the units in a limited number of ways. However, in other cases, the interconnection hardware 4414 may comprise hardware that can dynamically connect the units 4402-4412 of the neural network accelerator in a plurality of different ways in response to one or more control signals. For example, the interconnection hardware 4414 may comprise a crossbar and the units 4402-4412 may be connected to the crossbar in such a manner that the crossbar can dynamically connect the units in a plurality of different ways in response to one or more control signals. For example, in one hardware pass the crossbar may connect the output of the input unit 4402 to the input of the convolution processing unit 4404, connect the output of the convolution processing unit 4404 to the input of the element-wise operations processing unit 4406, and then connect the output of the element-wise operations processing unit 4406 to the input of the output unit 4412 so that the input data for the hardware pass is processed by the convolution processing unit 4404 and then the element-wise operations processing unit 4406. In another hardware pass, the crossbar may connect the output of the input unit 4402 to the input of the convolution processing unit 4404, and then the output of the convolution processing unit 4404 to the input of the output unit 4412 so that the input data for the hardware pass is processed only by the convolution processing unit 4404. Accordingly, in these cases the connections between the units 4402-4412 of the neural network accelerator (and thus the manner in which data may flow through the units of the NNA) are not fixed or static.
Although not shown, the units 4402-4412 and the interconnection hardware 4414 of the NNA may receive control information for each hardware pass indicating which units are to be active or used in the hardware pass and how each active unit and the interconnection hardware 4414 are to be configured for that hardware pass. The control information may also indicate other information such as the formats of the input and output data of the units.
In some cases, the neural network accelerator 4400 may also comprise an embedded processor 4416 which can receive control instructions to perform more complicated operations (such as scatter-add operations).
Reference is now made to
Each convolution engine 4502 comprises hardware logic configured to receive a set of weights {k1, k2, . . . , k8} that represent all or a portion of a filter, and a set of input data values {x1, x2, . . . , x8} that represent all or a portion of a window of the input data, and perform a multiply-accumulate calculation on the received weights and input data values. In some examples, as shown in
Since it may take more than one hardware pass of the convolution engines 4502 to generate a complete filter result (e.g. because a convolution engine may only receive and process a portion of the weights of a filter and/or a portion of the input data values of a window in a cycle), the convolution processing unit 4404 may comprise a plurality of accumulators 4504. A pass of the convolution engines comprises receiving a set of weights and a set of input data values and performing a multiply-accumulate operation thereon. Each accumulator 4504 receives the output of one convolution engine 4502 and adds the output to previous convolution engine outputs that relate to the same filter. Since a convolution engine 4502 may not generate or produce outputs that relate to the same filter in consecutive cycles, the partial results of one or more filters may be stored in an accumulation buffer 4506 and then the appropriate partial results may be provided to the accumulators 4504 each cycle by the accumulation buffer 4506.
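As a behavioural sketch only (Python/NumPy, with an assumed number of passes per filter), a single convolution engine pass and the accumulation of its partial results for one filter may be illustrated as follows; this models the arithmetic described above and is not a description of the hardware logic itself:

    import numpy as np

    # Sketch of one multiply-accumulate pass over 8 weights and 8 input data values,
    # with partial results for the same filter accumulated across passes (assumed 4 passes).
    def convolution_engine_pass(weights, values):
        """Multiply corresponding weights and input values and sum the products."""
        return float(np.dot(weights, values))

    filter_result = 0.0
    for _ in range(4):                                   # assumed number of passes per filter
        k = np.random.rand(8).astype(np.float32)         # portion of a filter's weights
        x = np.random.rand(8).astype(np.float32)         # portion of a window's input values
        filter_result += convolution_engine_pass(k, x)   # add to previous partial results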
In some cases, the convolution processing unit 4404 may comprise or have access to an input buffer 4508 for storing the elements of the input tensor and a coefficient buffer 4510 for storing the weights of the convolution. In some cases, the input buffer 4508 may be implemented as a plurality of banks of memory. In these cases, there may be a multiplexor (not shown) for each convolution engine 4502 that is coupled to each bank of the input buffer to allow the data stored in any of the banks to be selectively directed to any of the convolution engines 4502.
The neural network accelerator, convolution processing unit, and convolution engine of
The neural network accelerators and graphics processing units described herein may be embodied in hardware on an integrated circuit. The neural network accelerators or graphics processing units described herein may be configured to perform any of the methods described herein. Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms “module,” “functionality,” “component”, “element”, “unit”, “block” and “logic” may be used herein to generally represent software, firmware, hardware, or any combination thereof. In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine.
The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language such as C, Java or OpenCL. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled, or executed at a virtual machine or other software environment, causes a processor of the computer system at which the executable code is supported to perform the tasks specified by the code.
A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be or comprise any kind of general purpose or dedicated processor, such as a CPU, GPU, NNA, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), or the like. A computer or computer system may comprise one or more processors.
It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e., run) in an integrated circuit manufacturing system configures the system to manufacture a neural network accelerator or graphics processing unit configured to perform any of the methods described herein, or to manufacture a neural network accelerator or a graphics processing unit comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description.
Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a neural network accelerator or a graphics processing unit as described herein. Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a neural network accelerator or a graphics processing unit to be performed.
An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining hardware suitable for manufacture in an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog or VHDL, and as low-level circuit representations such as OASIS® and GDSII. Higher level representations which logically define hardware suitable for manufacture in an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g., providing commands, variables etc.) may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit.
An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a neural network accelerator or a graphics processing unit will now be described with respect to
The layout processing system 4804 is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g., in terms of logical components (e.g., NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components. This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system 4804 has determined the circuit layout it may output a circuit layout definition to the IC generation system 4806. A circuit layout definition may be, for example, a circuit layout description.
The IC generation system 4806 generates an IC according to the circuit layout definition, as is known in the art. For example, the IC generation system 4806 may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photolithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system 4806 may be in the form of computer-readable code which the IC generation system 4806 can use to form a suitable mask for use in generating an IC.
The different processes performed by the IC manufacturing system 4802 may be implemented all in one location, e.g., by one party. Alternatively, the IC manufacturing system 4802 may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties. For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties.
In other examples, processing of the integrated circuit definition dataset at an integrated circuit manufacturing system may configure the system to manufacture a neural network accelerator without the IC definition dataset being processed so as to determine a circuit layout. For instance, an integrated circuit definition dataset may define the configuration of a reconfigurable processor, such as an FPGA, and the processing of that dataset may configure an IC manufacturing system to generate a reconfigurable processor having that defined configuration (e.g., by loading configuration data to the FPGA).
In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above with respect to
In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset. In the example shown in
The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations. The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g., in integrated circuits) performance improvements can be traded-off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
Claims
1. A method of implementing a sparse submanifold convolution on a graphics processing unit, the method comprising:
- receiving, at the graphics processing unit, an input tensor in a dense format;
- identifying, at the graphics processing unit, active positions of the input tensor;
- performing, at the graphics processing unit, an indexed unfold operation on the input tensor based on the identified active positions of the input tensor to generate an input matrix comprising elements of the input tensor in each active window of the input tensor; and
- performing, at the graphics processing unit, a matrix multiplication between a weight matrix and the input matrix to generate an output matrix that comprises elements of an output tensor of the sparse submanifold convolution based on the active windows.
2. The method of claim 1, wherein the input tensor has at least a height dimension, a width dimension and a channel dimension and an active position of the input tensor is a height and width position in which at least one channel of the input tensor has a non-zero element.
3. The method of claim 2, wherein identifying the active positions of the input tensor comprises:
- identifying a height, width, and channel position of each non-zero element in the input tensor;
- identifying unique height and width pairs from the identified height, width and channel positions; and
- identifying the unique height and width pairs as the active positions.
4. The method of claim 1, wherein an active window of the input tensor is a window of the input tensor, used to generate an element of the output tensor of the sparse submanifold convolution, in which one or more predetermined positions within the window are active positions.
5. The method of claim 1, wherein performing the indexed unfold operation on the input tensor comprises identifying, from the identified active positions and one or more parameters of the sparse submanifold convolution, the active windows of the input tensor.
6. The method of claim 5, wherein identifying, from the identified active positions and the one or more parameters of the sparse submanifold convolution, the active windows of the input tensor comprises, for each identified active position:
- determining if the active position is at a predetermined position within a window of the input tensor for the sparse submanifold convolution; and
- in response to determining that the active position is at the predetermined position within a window of the input tensor for the sparse submanifold convolution, identifying the window as an active window.
7. The method of claim 5, wherein the active windows of the input tensor are based on a version of the input tensor that is padded such that when the sparse submanifold convolution has a stride of one in each sparse submanifold convolution dimension there is an active window for each active position, and identifying, from the identified active positions and the one or more parameters of the sparse submanifold convolution, the active windows of the input tensor comprises identifying the active window corresponding to each active position.
8. The method of claim 1, wherein performing the indexed unfold operation on the input tensor comprises identifying the elements of the input tensor in each active window.
9. The method of claim 8, wherein identifying the elements of the input tensor in an active window comprises implementing a series of nested loops, the series of nested loops comprising a loop for each dimension of an active window that loops from a predetermined position within the window through the elements of the active window in that dimension.
10. The method of claim 1, wherein performing the indexed unfold operation on the input tensor comprises storing the elements of each active window in the input matrix.
11. The method of claim 10, further comprising receiving a zeroed input matrix, and the elements of each active window of the input tensor are stored in the received input matrix.
12. The method of claim 1, further comprising performing, at the graphics processing unit, an indexed fold operation on the output matrix based on the active windows to generate the output tensor of the sparse submanifold convolution in a dense format.
13. The method of claim 12, wherein there is an active window for each active position of the input tensor, and performing the indexed fold operation on the output matrix comprises placing each element in the output matrix at the corresponding active position in a channel of the output tensor.
14. The method of claim 13, wherein performing the indexed fold operation on the output matrix comprises, for each active window, looping through each channel of the output tensor and placing the element of the output matrix corresponding to that active position and that channel at the active position of that channel in the output tensor.
15. The method of claim 12, wherein performing the indexed fold operation on the output matrix comprises identifying a position in the output tensor of each element in the output matrix, based on the active windows and one or more parameters of the sparse submanifold convolution, and placing each element of the output matrix in the corresponding identified position in the output tensor.
16. The method of claim 12, further comprising receiving a zeroed output tensor, and the elements of the output matrix are stored in the received output tensor.
17. The method of claim 12, wherein performing the indexed fold operation on the output matrix further comprises storing zeroes at each position of the output tensor that does not comprise an element of the output matrix.
18. The method of claim 1, wherein the input matrix comprises a column for each active window of the input tensor and each column of the input matrix comprises the elements of the input tensor in the corresponding active window; and
- wherein the weight matrix comprises a row for each filter to be applied to the input tensor and each row of the weight matrix comprises all weights forming the corresponding filter.
19. A graphics processing unit configured to perform the method as set forth in claim 1.
20. A non-transitory computer readable storage medium having stored thereon computer readable code configured to cause a graphics processing unit to perform the method as set forth in claim 1 when the code is run.
Type: Application
Filed: Feb 29, 2024
Publication Date: Sep 26, 2024
Inventors: Gunduz Vehbi Demirci (Hertfordshire), Cagatay Dikici (Hertfordshire), Grant Michael Stevens (Hertfordshire), Le Yang (Hertfordshire)
Application Number: 18/591,092