METHOD AND APPARATUS FOR PERFORMING DEPTHWISE CONVOLUTION OPERATION BASED ON A SYSTOLIC ARRAY
The present disclosure relates to a method and apparatus for performing a depthwise convolution operation based on a systolic array. A method of operating a systolic array including a plurality of processing element chains according to an embodiment of the present disclosure may include preloading a weight value to at least one of a plurality of first processing elements included in a first processing element chain, one of the plurality of processing element chains, through a first weight data link that is a column input link, providing a first input frame to a first processing element disposed at a tail portion of the first processing element chain among the plurality of first processing elements through a first input data link that is a row input link, obtaining a first output value by continuously performing operations from the first processing element disposed at the tail portion to a first processing element disposed at a head portion of the first processing element chain, based on the first input frame and a weight value preloaded on each of the plurality of first processing elements, through the first input data link, and obtaining a first cumulative sum value by continuously performing operations from the first processing element disposed at the head portion to the first processing element disposed at the tail portion, based on the first output value and a weight value preloaded on at least one of the plurality of first processing elements, through a first cumulative sum link, which is a row input link.
This application claims priority to and the benefit of Korean Patent Application No. 10-2023-0057115 filed in the Korean Intellectual Property Office on May 2, 2023, the entire contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
Field of the Invention
The present disclosure relates to a method and apparatus for performing a depthwise convolution operation based on a systolic array.
Description of the Related Art
In general, systolic arrays are optimized for general matrix multiplication (GEMM) operations. When computing an ordinary convolution layer of a deep learning model, the systolic array converts the convolution layer into a two-dimensional matrix multiplication and then performs that multiplication. In the case of depthwise convolution, however, the converted matrix contains many entries that are simply zero. Excluding these zero values, in a systolic array composed of S processing elements horizontally and vertically (S×S in total), only S processing elements participate in the actual computation. As a result, the utilization of processing elements drops sharply when performing a depthwise convolution operation, and the processing time of the entire deep learning model also becomes slower.
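The utilization problem described above can be sketched numerically. The following snippet is only illustrative (the channel count C and the weight values are arbitrary, and the function name is not from the disclosure): it builds the lowered weight matrix that a GEMM-based convolution would use and shows that only a 1/C fraction of its entries, one diagonal block per channel, is nonzero.

```python
def lowered_depthwise_weights(C, K):
    """Lower a depthwise filter bank (C channels, one KxK filter each)
    into the 2-D weight matrix a GEMM-based convolution would use.
    Rows index the im2col patch (C*K*K values), columns the C outputs."""
    rows, cols = C * K * K, C
    lowered = [[0.0] * cols for _ in range(rows)]
    w = 1.0
    for c in range(C):
        # channel c's KxK weights only touch channel c's patch rows
        for k in range(K * K):
            lowered[c * K * K + k][c] = w
            w += 1.0
    return lowered

M = lowered_depthwise_weights(C=8, K=3)
nonzero = sum(1 for row in M for v in row if v != 0.0)
total = len(M) * len(M[0])
print(nonzero / total)   # 1/C = 0.125 — most PEs would multiply by zero
```

On an S×S array this block-diagonal structure is why only one column of processing elements per channel does useful work.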
SUMMARY OF THE INVENTION
The purpose of the present disclosure is to solve the above problems by providing a method and apparatus for performing depthwise convolution operations based on a systolic array, for effectively accelerating depthwise convolution operations in a systolic array.
A method of operating a systolic array including a plurality of processing element chains according to an embodiment of the present disclosure may comprise preloading a weight value to at least one of a plurality of first processing elements included in a first processing element chain, one of the plurality of processing element chains, through a first weight data link that is a column input link, providing a first input frame to a first processing element disposed at a tail portion of the first processing element chain among the plurality of first processing elements through a first input data link that is a row input link, obtaining a first output value by continuously performing operations from the first processing element disposed at the tail portion to a first processing element disposed at a head portion of the first processing element chain, based on the first input frame and a weight value preloaded on each of the plurality of first processing elements, through the first input data link, and obtaining a first cumulative sum value by continuously performing operations from the first processing element disposed at the head portion to the first processing element disposed at the tail portion, based on the first output value and a weight value preloaded on at least one of the plurality of first processing elements, through a first cumulative sum link, which is a row input link.
Each of the plurality of processing elements included in the first processing element chain may be one of an active processing element and a non-active processing element, and the preloading a weight value to at least one of a plurality of processing elements included in the first processing element chain, one of the plurality of processing element chains, through the weight data link that is the column input link, may include loading a weight value to the active processing elements among the plurality of processing elements.
In the first processing element chain, a plurality of the active processing elements and one non-active processing element may be arranged in series.
The weight values may be loaded based on column-major order.
The method may further comprise preloading a weight value to at least one of a plurality of second processing elements included in a second processing element chain, one of the plurality of processing element chains, through a second weight data link that is a column input link, providing a second input frame to a second processing element disposed at a tail portion of the second processing element chain among the plurality of second processing elements through a second input data link that is a row input link, obtaining a second output value by continuously performing operations from the second processing element disposed at the tail portion to a second processing element disposed at a head portion of the second processing element chain, based on the second input frame and a weight value preloaded on each of the plurality of second processing elements, through the second input data link, and obtaining a second cumulative sum value by continuously performing operations from the second processing element disposed at the head portion to the second processing element disposed at the tail portion, based on the second output value and a weight value preloaded on at least one of the plurality of second processing elements, through a second cumulative sum link, which is a row input link.
The providing a second input frame to a second processing element disposed at a tail portion of the second processing element chain among the plurality of processing elements through a second input data link that is a row input link may include providing the second input frame to the second processing element disposed at the tail portion once every (M+1) cycles, where M may be the time required for one of the first processing elements to perform an operation based on the preloaded weight value and the first output value, and a cycle may be the difference between the time at which the first input frame is provided to the first processing element disposed at the tail portion and the time at which the first cumulative sum value is obtained.
In the step of obtaining the second output value, at least one of the second processing elements may not perform the operation based on the second input frame and the preloaded weight value.
In the step of obtaining the second cumulative sum value, at least one of the second processing elements may not perform the operation based on the second output value and the preloaded weight value.
According to the present disclosure, a structure for effectively accelerating depthwise convolution in a systolic array is proposed. The structure makes it possible to reuse inputs in two directions without hurting the area and power efficiency of a systolic array optimized for matrix operations and general convolution operations, which can increase the speed of depthwise convolution operations.
According to the present disclosure, it is possible to improve operation performance by effectively accelerating, through a systolic array, deep learning models that use depthwise convolution, such as MobileNet, EfficientNet, and MobileViT.
The accompanying drawings, which are included to provide a further understanding of the present disclosure and constitute a part of the present application, illustrate embodiments of the present disclosure and, together with the detailed description, serve to explain the principles of the present disclosure.
Hereinafter, embodiments disclosed in the present disclosure will be described in detail with reference to the accompanying drawings. The same or similar elements are denoted by the same reference numerals regardless of the drawing in which they appear, and redundant descriptions thereof are omitted. The suffixes “module” and “unit” of elements used in the following description are given or used interchangeably only for ease of writing the specification, and do not themselves have distinct meanings or roles. In addition, in describing the embodiments disclosed in the present disclosure, when it is determined that a detailed description of related known technologies may obscure the subject matter of the embodiments disclosed in the present disclosure, the detailed description thereof is omitted. The accompanying drawings are provided for easy understanding of the embodiments disclosed in the present disclosure; the technical spirit disclosed herein is not limited by the accompanying drawings and is to be understood as including all modifications, equivalents, and alternatives included in the spirit and scope of the present disclosure.
While terms, such as “first”, “second”, etc., may be used to describe various elements, the elements are not limited by the above terms. The above terms are used only for the purpose of distinguishing one element from another element.
When an element is referred to as being “electrically coupled” or “connected” to another element, the element may be directly electrically coupled or connected to the other element, but it should be understood that another element may exist between them. On the other hand, when an element is referred to as being “directly electrically coupled” or “directly connected” to another element, it should be understood that no other element exists between them. Expressions in the singular include plural expressions unless the context clearly indicates otherwise.
In the present disclosure, it should be understood that terms such as “comprises” or “have” are intended to designate the presence of features, numbers, steps, operations, elements, parts, or combinations thereof described in the specification, but do not preclude the possibility of the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.
Each processing element 100 may have three types of data links: an input value data link (IL1 to ILn), a weight data link (WL1 to WLn), and a first partial sum data link (PA1 to PAn). In the case of the systolic array of the prior art, the input value data link (IL1 to ILn) may be a row input link, and the weight data link (WL1 to WLn) and the first partial sum data link (PA1 to PAn) may be a column input link.
That is, in the systolic array 10 according to the prior art, the directions of the input value data links (IL1 to ILn) and the first partial sum data links (PA1 to PAn) are orthogonal to each other, whereas in the systolic array 20 according to an embodiment of the present disclosure, the input value data links (IL1 to ILn) and the second partial sum data links (FPA1 to FPAn) run in the same direction; this is the difference.
Meanwhile, each of the processing elements 200 is assigned to each dimension of the depthwise convolution. For example, if the systolic array 20 includes S processing elements 200 horizontally and vertically, the number of processing elements 200 may be S×S. The processing elements 200 included in the same row may form a processing element chain.
In the depthwise convolution, there are Ih×Iw input values for each channel, which are called input maps. In the present disclosure, among the input maps, the area that is currently being inserted into the systolic array is referred to as an input frame. The size of the input frame will be described later.
Among the processing element chains (e.g., PE chain 1 to PE chain s), the operation of one processing element chain is described below as an example.
First, the description will be based on the case where M is 0, and the case where M is 1 or more will be described later. The input values of the input frame are inserted into the processing element chain in column-major order, and in one column, all input values corresponding to the height of the input frame are input, and then the input values of the next column are input sequentially.
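The column-major insertion order can be sketched as follows; the helper name and the i00-style value labels are only illustrative, not part of the disclosure.

```python
def column_major_stream(frame):
    """Order the input-frame values as they are fed to the chain tail:
    all Fh values of one column first, then the next column, and so on."""
    Fh, Iw = len(frame), len(frame[0])
    return [frame[r][c] for c in range(Iw) for r in range(Fh)]

# a 3x3 frame named like the i00..i22 values used in the description
frame = [["i00", "i01", "i02"],
         ["i10", "i11", "i12"],
         ["i20", "i21", "i22"]]
print(column_major_stream(frame))
# ['i00', 'i10', 'i20', 'i01', 'i11', 'i21', 'i02', 'i12', 'i22']
```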
Like the systolic array according to the prior art (e.g., the systolic array 10), weight values are preloaded to the processing elements before the input values are supplied.
Among the processing elements included in the processing element chain, K×K processing elements are preloaded with actual weight values, where K×K corresponds to the size of the convolution filter. The remaining processing elements are loaded with zero rather than an actual weight value. A processing element loaded with a weight value is called an active processing element, and a processing element loaded with zero rather than a weight value is called a non-active processing element. Starting from the head of the processing element chain, weight values are loaded in column-major order, and the K weight values corresponding to one filter column are followed by several non-active processing elements. The number of non-active processing elements determines the degree of reuse of input values and will be described later.
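As a sketch of this preload pattern, the snippet below lays out one chain's weights for a hypothetical 2×2 filter and an input frame of height 3. The function name, the w00-style labels, and the choice of Fh−K zero-loaded elements per filter column are assumptions made for illustration, consistent with the worked numbers given later in the text.

```python
def chain_weight_layout(filter_cols, Fh):
    """Column-major preload pattern for one PE chain: from the head,
    the K weights of one filter column, then Fh-K zeros (non-active
    processing elements), repeated for each of the K filter columns."""
    K = len(filter_cols[0])
    layout = []
    for col in filter_cols:        # one filter column at a time
        layout += list(col)        # K active processing elements
        layout += [0] * (Fh - K)   # non-active (zero-loaded) elements
    return layout

# a 2x2 filter with columns (w00, w10) and (w01, w11), frame height 3
cols = [("w00", "w10"), ("w01", "w11")]
print(chain_weight_layout(cols, Fh=3))
# ['w00', 'w10', 0, 'w01', 'w11', 0]
```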
In cycle 6 of the illustrated example, the processing element chain is filled for the first time, and the input values i00, i10, i01, and i11 in the 2×2 area starting from i00 at the head of the chain form the convolution window corresponding to the first output value.
In the next cycle, a new input value enters the input frame and processing element chain, and the oldest input value leaves them. Since all three input values of the second column were input in the previous cycles, cycle 7 moves to the next column: i02 enters as a new input value, and i00, the oldest input value, is excluded from the input frame and the processing element chain. In cycle 7, the input frame is no longer a rectangular area. At this time, i10, i20, i11, and i21, the 2×2 area starting from the input value i10 located at the head of the processing element chain, form the convolution window corresponding to the second output value.
If the vertical movement of the convolution window continued, each input value could be reused up to K times. In this example, because a small input frame was used, only one vertical step is possible; if the height of the input frame is set larger, the convolution window can be moved vertically several times.
In the subsequent cycle, cycle 8, the correct convolution window is not formed from the processing element chain head, and cycle 8 becomes a skip cycle that passes without generating the correct output value.
Afterwards, in cycle 9, the processing element chain head is again positioned at the top of a row, and a correct convolution window can be formed. The convolution window generated at this time is equivalent to moving the window used in the previous cycles by one step in the horizontal direction, and the overlapping input values i01 and i11 are reused again. If the convolution window continues to move horizontally, each input value can be reused up to K×K times, counting both reuse due to horizontal movement and reuse due to vertical movement. This is the maximum number of times one input value can be reused in depthwise convolution.
The number of moves possible in the vertical direction is determined by the height of the input frame, while the number of moves possible in the horizontal direction is independent of the size of the input frame. Depending on the size of the input map or output map, horizontal movement of the convolution window can continue until there is no more room left to move.
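The cycle-by-cycle behavior described above can be checked with a small behavioral sketch of a single chain. This is an illustrative model, not the disclosed hardware: it assumes the head holds the oldest value, the weight layout of K active elements followed by Fh−K zeros per filter column, and that a correct window forms only when the head sits in the top Fh−K+1 rows of its column (other cycles being skip cycles). The simulated outputs are compared against a directly computed depthwise convolution.

```python
from collections import deque

def simulate_chain(frame, filt):
    """Behavioral sketch of one PE chain: inputs stream in column-major
    order, the chain holds the last K*Fh values (head = oldest), and a
    correct convolution window forms only on non-skip cycles."""
    Fh, Iw, K = len(frame), len(frame[0]), len(filt)
    # chain weights, head first: K active PEs then Fh-K zeros per filter column
    weights = []
    for c in range(K):
        weights += [filt[r][c] for r in range(K)] + [0] * (Fh - K)
    # inputs enter in column-major order; pad so the final window can drain
    # (the pads only ever line up with non-active, zero-weight positions)
    stream = [frame[r][c] for c in range(Iw) for r in range(Fh)] + [0] * (Fh - K)
    chain = deque(maxlen=K * Fh)       # chain[0] is the head (oldest value)
    out = {}
    for t, x in enumerate(stream):
        chain.append(x)
        if len(chain) < K * Fh:
            continue
        head = t - (K * Fh - 1)        # stream index of the head value
        r, c = head % Fh, head // Fh   # window top-left corner
        if r <= Fh - K and c <= Iw - K:    # correct window; otherwise skip cycle
            out[(r, c)] = sum(w * v for w, v in zip(weights, chain))
    return out

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12]]
filt = [[1, 0],
        [0, 1]]
direct = {(r, c): sum(filt[i][j] * frame[r + i][c + j]
                      for i in range(2) for j in range(2))
          for r in range(2) for c in range(3)}
print(simulate_chain(frame, filt) == direct)   # True
```

In this sketch the skip cycles are exactly the cycles where the head's row index exceeds Fh−K, matching the cycle-8 behavior described above.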
To solve this, when M is 1 or more, the frequency with which input values are supplied to the processing element chain can be adjusted. Specifically, each input value is input once every M+1 cycles. In this case, partial sum accumulation is possible even if a delay of M cycles is added at each step, through the partial sum data link configured in the opposite direction of the input data link. This is explained in detail as follows.
In Equation 1, for example, if the horizontal axis length of the systolic array is 16 (that is, a chain includes 16 processing elements) and the convolution filter size is 3×3, Fh is 5 and R is 3. That is, in this configuration, input values are reused repeatedly in the vertical and horizontal directions through the input frame, and when the convolution window moves, output values corresponding to three rows of the output map can be computed at once. According to the present disclosure, when performing depthwise convolution, the calculation speed can be improved by up to K×K times compared to a matrix-multiplication-based depthwise convolution operation.
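Equation 1 itself is not reproduced in the text above, so the following is only a plausible reconstruction that matches the worked numbers (S = 16 and K = 3 giving Fh = 5 and R = 3): the chain's S processing elements must hold K filter columns of Fh slots each, so Fh = S // K, and a frame of height Fh allows R = Fh − K + 1 vertical window positions. Both formulas are assumptions, not quoted from the disclosure.

```python
def frame_height_and_rows(S, K):
    """One plausible form of Equation 1, consistent with the worked
    example in the text (S=16, K=3 gives Fh=5, R=3)."""
    Fh = S // K       # input frame height: K columns of Fh slots fit in S PEs
    R = Fh - K + 1    # output-map rows produced per horizontal pass
    return Fh, R

print(frame_height_and_rows(16, 3))   # (5, 3)
```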
The systolic array may obtain a first cumulative sum value.
Here, each of the plurality of processing elements included in the first processing element chain may be one of an active processing element and a non-active processing element, and the preloading a weight value to at least one of a plurality of processing elements included in the first processing element chain, one of the plurality of processing element chains, through the weight data link that is the column input link, may include loading a weight value to the active processing elements among the plurality of processing elements.
Here, in the first processing element chain, a plurality of the active processing elements and one non-active processing element may be arranged in series.
The systolic array may obtain a second cumulative sum value (S920). Here, the method may include preloading a weight value to at least one of a plurality of second processing elements included in a second processing element chain, one of the plurality of processing element chains, through a second weight data link that is a column input link, providing a second input frame to a second processing element disposed at a tail portion of the second processing element chain among the plurality of second processing elements through a second input data link that is a row input link, obtaining a second output value by continuously performing operations from the second processing element disposed at the tail portion to a second processing element disposed at a head portion of the second processing element chain, based on the second input frame and a weight value preloaded on each of the plurality of second processing elements, through the second input data link, and obtaining a second cumulative sum value by continuously performing operations from the second processing element disposed at the head portion to the second processing element disposed at the tail portion, based on the second output value and a weight value preloaded on at least one of the plurality of second processing elements, through a second cumulative sum link, which is a row input link.
Here, the providing a second input frame to a second processing element disposed at a tail portion of the second processing element chain among the plurality of processing elements through a second input data link that is a row input link may include providing the second input frame to the second processing element disposed at the tail portion once every (M+1) cycles, where M may be the time required for one of the first processing elements to perform an operation based on the preloaded weight value and the first output value, and a cycle may be the difference between the time at which the first input frame is provided to the first processing element disposed at the tail portion and the time at which the first cumulative sum value is obtained.
In the step of obtaining the second output value, at least one of the second processing elements may not perform the operation based on the second input frame and the preloaded weight value.
In the step of obtaining the second cumulative sum value, at least one of the second processing elements may not perform the operation based on the second output value and the preloaded weight value.
Table 1 compares the areas of the semiconductor logic circuits when each of the five compared methods is implemented in hardware.
By applying the method of the present disclosure to the evaluated models MobileNet-V2, ShuffleNet, EfficientNet B0, and MobileViT, depthwise convolution operation performance increases by 2.9 times, 3.7 times, and 3.1 times on 16×16, 32×32, and 64×64 PE arrays, respectively. On the 32×32 PE array, the overall model operation time is reduced by 51%, 27%, 60%, and 41% for the respective models. In terms of power consumption, the method showed on average 22% lower power consumption than HeSA, which achieved a similar reduction in time.
Most of the terms used in the present disclosure are selected from common ones widely used in the field, but some terms are arbitrarily selected by the applicant and their meanings are described in detail in the following description as necessary. Accordingly, the present disclosure should be understood based on the intended meaning of the terms and not the mere names or meanings of the terms.
It is obvious to those skilled in the art that the present disclosure can be embodied in other specific forms without departing from the essential features of the present disclosure. Accordingly, the above detailed description should not be construed as restrictive in all respects and should be considered illustrative. The scope of the present disclosure should be determined by reasonable interpretation of the appended claims, and all changes within the equivalent scope of the present disclosure are included in the scope of the present disclosure.
Claims
1. A method of operating a systolic array including a plurality of processing element chains, comprising:
- preloading a weight value to at least one of a plurality of first processing elements included in a first processing element chain, one of the plurality of processing element chains, through a first weight data link that is a column input link;
- providing a first input frame to a first processing element disposed at a tail portion of the first processing element chain among the plurality of first processing elements through a first input data link that is a row input link;
- obtaining a first output value by continuously performing operations from the first processing element disposed at the tail portion to a first processing element disposed at a head portion of the first processing element chain, based on the first input frame and a weight value preloaded on each of the plurality of first processing elements, through the first input data link; and
- obtaining a first cumulative sum value by continuously performing operations from the first processing element disposed at the head portion to the first processing element disposed at the tail portion, based on the first output value and a weight value preloaded on at least one of the plurality of first processing elements, through a first cumulative sum link, which is a row input link.
2. The method of claim 1, wherein each of the plurality of processing elements included in the first processing element chain is one of an active processing element and a non-active processing element, and
- wherein the preloading a weight value to at least one of a plurality of processing elements included in the first processing element chain, one of the plurality of processing element chains, through the weight data link that is the column input link, loads a weight value to the active processing elements among the plurality of processing elements.
3. The method of claim 2, wherein in the first processing element chain, a plurality of the active processing elements and one non-active processing element are arranged in series.
4. The method of claim 1, wherein the weight values are loaded based on column-major order.
5. The method of claim 1, further comprising:
- preloading a weight value to at least one of a plurality of second processing elements included in a second processing element chain, one of the plurality of processing element chains, through a second weight data link that is a column input link;
- providing a second input frame to a second processing element disposed at a tail portion of the second processing element chain among the plurality of second processing elements through a second input data link that is a row input link;
- obtaining a second output value by continuously performing operations from the second processing element disposed at the tail portion to a second processing element disposed at a head portion of the second processing element chain, based on the second input frame and a weight value preloaded on each of the plurality of second processing elements, through the second input data link; and
- obtaining a second cumulative sum value by continuously performing operations from the second processing element disposed at the head portion to the second processing element disposed at the tail portion, based on the second output value and a weight value preloaded on at least one of the plurality of second processing elements, through a second cumulative sum link, which is a row input link.
6. The method of claim 5, wherein the providing a second input frame to a second processing element disposed at a tail portion of the second processing element chain among the plurality of processing elements through a second input data link that is a row input link includes,
- providing the second input frame to the second processing element disposed at the tail portion once every (M+1) cycles,
- wherein M is a time required for one of the first processing elements to perform an operation based on the preloaded weight value and the first output value, and
- wherein a cycle is a difference between a time at which the first input frame is provided to the first processing element disposed at the tail portion and a time at which the first cumulative sum value is obtained.
7. The method of claim 5, wherein, in the step of obtaining the second output value, at least one of the second processing elements does not perform the operation based on the second input frame and the preloaded weight value.
8. The method of claim 5, wherein, in the step of obtaining the second cumulative sum value, at least one of the second processing elements does not perform the operation based on the second output value and the preloaded weight value.
Type: Application
Filed: May 1, 2024
Publication Date: Nov 7, 2024
Applicant: Research & Business Foundation Sungkyunkwan University (Suwon-si)
Inventors: Hyungmin CHO (Seoul), Mingeon PARK (Suwon-si), Seokjin HWANG (Seoul)
Application Number: 18/652,352