DEEP NEURAL NETWORK TRAINING ACCELERATOR AND OPERATION METHOD THEREOF

A deep neural network training accelerator includes an operational unit sequentially performing first and second operations on a plurality of input data of a sub-set according to a mini-batch gradient descent, a determination unit determining each of the input data as one of skip data and training data based on a confidence matrix obtained by the first operation, and a control unit controlling the operational unit to skip the second operation with respect to the skip data.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This U.S. non-provisional patent application claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2020-0089210, filed on Jul. 12, 2020, the contents of which are hereby incorporated by reference in their entirety.

BACKGROUND

1. Field of Disclosure

The present disclosure relates to a deep neural network training accelerator and an operation method of the deep neural network training accelerator. More particularly, the present disclosure relates to a deep neural network training accelerator based on a prediction reliability according to a mini-batch gradient descent and an operation method of the deep neural network training accelerator.

2. Description of the Related Art

A deep neural network (DNN) provides state-of-the-art performance in many fields, such as image recognition/classification and object detection, owing to its large number of parameters and computations.

However, since the DNN requires a large quantity of computation to train its numerous parameters, training the DNN can take a day or even a week. Accordingly, in order to decrease the time and energy consumed to train the DNN, it is effective to reduce the quantity of computation itself required for the training.

In general, the DNN is trained based on a mini-batch gradient descent. Since noise inherently accompanies the mini-batch gradient descent, it is possible to approximate the computation required for the training instead of calculating it precisely.

However, DNN training based on the mini-batch gradient descent still needs a long training time and a large amount of training energy. Thus, there is a need for a way to distinguish between operations that are important for the training and operations that are not, and to apply effective approximations to the relatively less important operations.

SUMMARY

The present disclosure provides a deep neural network training accelerator capable of increasing its training speed and reducing its training energy.

The present disclosure provides an operation method of the deep neural network training accelerator.

Embodiments of the inventive concept provide a deep neural network training accelerator including an operational unit sequentially performing first and second operations on a plurality of input data of a sub-set according to a mini-batch gradient descent, a determination unit determining each of the input data as one of skip data and training data based on a confidence matrix obtained by the first operation, and a control unit controlling the operational unit to skip the second operation with respect to the skip data.

The operational unit performs the second operation with respect to the training data after a predetermined time elapses from a time point at which the first operation is performed.

The first operation is a first training stage of the mini-batch gradient descent, which uses a forward propagation algorithm.

The second operation is a second training stage of the mini-batch gradient descent, which sequentially uses a backward propagation algorithm and a weight update algorithm.

The determination unit is implemented as a comparator that compares a largest element of the confidence matrix with a predetermined threshold value.

The comparator outputs a low signal corresponding to the skip data to the control unit when a value of the largest element is equal to or greater than the predetermined threshold value.

The comparator outputs a high signal corresponding to the training data to the control unit when a value of the largest element is smaller than the predetermined threshold value.

The control unit parallelizes the second operation with respect to the training data in response to the low signal.

The number of the low signals is inversely proportional to an operation time of the second operation.

The deep neural network training accelerator further includes an input unit assigning each of the input data arbitrarily selected from total input data to the operational unit and an output unit summing each variation in weight output through the operational unit to output a variation in output weight corresponding to a gradient of the sub-set.

The operational unit has a systolic array structure and includes a plurality of operational devices that sequentially performs the first and second operations.

The operational unit initializes any one operational device corresponding to the skip data among the operational devices in response to a parallelization control signal applied thereto from the control unit.

The operational unit reassigns a portion of the training data assigned to the other operational devices among the operational devices to the any one operational device.

The control unit reassigns a plurality of sub-data divided from each of the training data to the operational devices according to a data flow.

The data flow refers to a data movement path for reading and storing data.

Embodiments of the inventive concept provide a method of operating a deep neural network training accelerator. The method includes allowing an operational unit to perform a first operation on a plurality of input data of a sub-set according to a mini-batch gradient descent, allowing a determination unit to determine the input data as one of skip data and training data based on a confidence matrix obtained by the first operation, allowing a control unit to output a parallelization control signal to skip a second operation with respect to the skip data in response to the skip data, and allowing the operational unit to skip the second operation with respect to the skip data and to perform the second operation on the training data based on the parallelization control signal.

The first operation is a first training stage of the mini-batch gradient descent, which uses a forward propagation algorithm.

The second operation is a second training stage of the mini-batch gradient descent, which sequentially uses a backward propagation algorithm and a weight update algorithm.

The determination unit is implemented as a comparator that compares a largest element of the confidence matrix with a predetermined threshold value.

The performing of the second operation includes allowing the operational unit to initialize any one operational device corresponding to the skip data among a plurality of operational devices in response to the parallelization control signal, allowing the operational unit to reassign a portion of each of the training data assigned to the other operational devices to the any one operational device, and allowing the operational unit to process the second operation with respect to the training data in parallel using the operational devices after a predetermined time elapses from a time point at which the first operation is performed.

According to the above, a total amount of training operation to output a variation in weight is significantly reduced, and thus, energy consumption also decreases.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other advantages of the present disclosure will become readily apparent by reference to the following detailed description when considered in conjunction with the accompanying drawings wherein:

FIG. 1 is a block diagram showing a deep neural network training accelerator according to an exemplary embodiment of the present disclosure;

FIG. 2 is a view showing a mini-batch gradient descent;

FIG. 3 is a block diagram showing an example of the deep neural network training accelerator of FIG. 1;

FIG. 4 is a view showing a first operation performed by an operational unit of FIG. 3;

FIG. 5 is a view showing a second operation performed by an operational unit of FIG. 3;

FIG. 6 is a flowchart showing an operation performed by the deep neural network training accelerator of FIG. 1; and

FIG. 7 is a flowchart showing a second operation performed by an operational unit of FIG. 5.

DETAILED DESCRIPTION

Hereinafter, embodiments of the present disclosure will be described with reference to specific embodiments and the accompanying drawings. However, the embodiments of the present disclosure may be modified in various other forms, and the scope of the present disclosure is not limited to the embodiments described below. In addition, the embodiments of the present disclosure are provided to more fully describe the present disclosure to those skilled in the art. Accordingly, the shapes and sizes of elements in the drawings may be exaggerated for a clearer description, and elements indicated by the same reference numerals in the drawings are the same elements.

In addition, in order to clearly describe the present disclosure, parts irrelevant to the description are omitted from the drawings, thicknesses are enlarged to clearly express various layers and regions, and components having the same function within the scope of the same idea are indicated by the same reference numerals. Further, throughout the specification, when a part “includes” a certain component, this means that the part may further include other components rather than excluding other components, unless otherwise stated.

FIG. 1 is a block diagram showing a deep neural network training accelerator 10 according to an exemplary embodiment of the present disclosure, and FIG. 2 is a view showing a mini-batch gradient descent.

Referring to FIGS. 1 and 2, the deep neural network training accelerator 10 may include an operational unit 100, a determination unit 200, and a control unit 300.

The operational unit 100 may sequentially perform first and second operations on a plurality of input data of a sub-set according to the mini-batch gradient descent (MGD).

In this case, the sub-set may include the input data arbitrarily selected from total input data. For example, the input data may include one of data for classifying or detecting images, data for determining medical information, data for autonomous driving, and data for security and system management.

As shown in FIG. 2, the mini-batch gradient descent (MGD) may be a neural network training schedule that sequentially performs the first and second operations to train a variation in weight with respect to each input data of the sub-set. In this case, a sum of the variations in weight of the input data may be the variation in output weight of the sub-set.
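
Written out with hypothetical notation (the symbol ΔW_i is ours, not the publication's), the relation just described for a sub-set of N input data is:

```latex
% Variation in output weight of the sub-set: the sum of the per-input
% variations in weight \Delta W_i produced by the second operation.
\Delta W_{\text{out}} = \sum_{i=1}^{N} \Delta W_i
```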

The mini-batch gradient descent (MGD) may include a training method that sequentially uses a forward propagation algorithm (FP), a backward propagation algorithm (BP), and a weight update algorithm (WU).

In this case, the first operation may be a first training stage of the mini-batch gradient descent (MGD) that uses the forward propagation algorithm (FP), and the second operation may be a second training stage of the mini-batch gradient descent (MGD) that uses the backward propagation algorithm (BP) and the weight update algorithm (WU). That is, the second operation may be performed after a predetermined time elapses from a time point at which the first operation is performed.

Then, the determination unit 200 may determine each of the input data as one of skip data and training data based on a confidence matrix obtained by the first operation.

As shown in FIG. 2, the largest element of the confidence matrix may indicate a loss state corresponding to a training contribution.

For example, as shown in FIG. 2, when the largest element of the confidence matrix is equal to or greater than a predetermined threshold value, the confidence matrix may indicate a low loss state corresponding to a low training contribution. In addition, when the largest element of the confidence matrix is smaller than the predetermined threshold value, the confidence matrix may indicate a high loss state corresponding to a high training contribution.

The determination unit 200 according to the exemplary embodiment may be implemented as a comparator that compares the largest element among elements of the confidence matrix with the predetermined threshold value. For example, as shown in FIG. 3, the threshold value of the comparator may be previously set to a value of about 0.9.

The comparator may output a determination signal, which determines the input data as one of the skip data and the training data, to the control unit 300 based on the comparison result. In the present exemplary embodiment, the determination signal may be one of a low signal and a high signal.

In detail, when the largest element among the elements of the confidence matrix is smaller than the predetermined threshold value, the determination unit 200 may output the high signal that determines the input data corresponding to the confidence matrix as the training data. In addition, when the largest element among the elements of the confidence matrix is equal to or greater than the predetermined threshold value, the determination unit 200 may output the low signal that determines the input data corresponding to the confidence matrix as the skip data.
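
As an illustration only, the comparator's decision rule can be modeled in software as follows; the function name, the softmax-style confidence vector, and the default threshold of 0.9 (taken from the FIG. 3 example) are assumptions for the sketch, not a description of the claimed hardware.

```python
import numpy as np

def determination_signal(confidence: np.ndarray, threshold: float = 0.9) -> int:
    """Software model of the comparator: return 1 (high signal -> training
    data) when the largest confidence element is below the threshold, and
    0 (low signal -> skip data) when it is equal to or above it."""
    return 1 if confidence.max() < threshold else 0

# A confident prediction is skipped; an uncertain one is trained.
print(determination_signal(np.array([0.02, 0.95, 0.03])))  # 0 -> skip data
print(determination_signal(np.array([0.40, 0.35, 0.25])))  # 1 -> training data
```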

Then, the control unit 300 may control the operational unit 100 to skip the second operation with respect to the skip data based on the determination signal determined by the determination unit 200.

In detail, the control unit 300 may receive the low signal corresponding to the skip data determined by the determination unit 200. In this case, the control unit 300 may output a parallelization control signal to the operational unit 100 in response to the low signal to skip the second operation with respect to the corresponding skip data, and thus, the second operation of the operational unit 100 may be controlled. Accordingly, the operational unit 100 may skip the second operation with respect to the skip data based on the parallelization control signal.

In addition, the control unit 300 may parallelize the second operation with respect to at least one training data in response to the low signal.

In detail, the control unit 300 may reassign a portion of each of the at least one training data to an operational device to which the skip data are assigned, and thus, may parallelize the second operation for the at least one training data.

For example, the control unit 300 may divide the at least one training data into a plurality of first sub-data and a plurality of second sub-data and may reassign one of the first and second sub-data to the operational device to which the skip data are assigned.
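
A toy sketch of this division and reassignment, under assumed data structures (a dict from device ID to a list of sub-data chunks; the even split and all names are illustrative):

```python
def reassign(assignments: dict, skip_ids: set) -> dict:
    """Move one of the two sub-data halves of each largest workload onto a
    device freed by skip data. Purely illustrative bookkeeping."""
    idle = [d for d in assignments if d in skip_ids]
    busy = sorted((d for d in assignments if d not in skip_ids),
                  key=lambda d: -len(assignments[d]))  # largest data first
    new = {d: list(chunks) for d, chunks in assignments.items()}
    for free_dev, src_dev in zip(idle, busy):
        chunks = new[src_dev]
        half = len(chunks) // 2
        new[free_dev] = chunks[half:]   # second sub-data -> freed device
        new[src_dev] = chunks[:half]    # first sub-data stays in place
    return new
```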

In this case, the number of the low signals may be inversely proportional to an operation time of the second operation. For example, as the number of the low signals increases, the operation time of the second operation by the operational unit 100 may decrease. In addition, as the number of the low signals decreases, the operation time of the second operation by the operational unit 100 may increase.

According to the exemplary embodiment, when the number of the low signals is smaller than the number of the high signals, the control unit 300 may reassign the portions of the training data to the operational devices to which the skip data are assigned, in descending order of data size.

The deep neural network training accelerator 10 according to the exemplary embodiment of the present disclosure may identify each of the input data as one of the skip data and the training data using the determination unit 200 based on the confidence matrix, and thus, may determine the training contribution of each of the input data. In this case, the deep neural network training accelerator 10 may skip the second operation of the operational unit 100 with respect to the skip data using the control unit 300, and thus, may significantly reduce the amount of training computation required to output the variation in output weight.

FIG. 3 is a block diagram showing a deep neural network training accelerator 11 as an example of the deep neural network training accelerator 10 of FIG. 1, FIG. 4 is a view showing the first operation performed by the operational unit 100 of FIG. 3, and FIG. 5 is a view showing the second operation performed by the operational unit 100 of FIG. 3.

Referring to FIGS. 1 to 5, the deep neural network training accelerator 11 may include the operational unit 100, the determination unit 200, the control unit 300, an input unit 400, and an output unit 500. Hereinafter, in FIGS. 3 to 5, the same reference numerals denote the same elements in FIGS. 1 and 2, and thus, repetitive descriptions of the operational unit 100, the determination unit 200, and the control unit 300 will be omitted.

The input unit 400 may individually assign each of the input data of the sub-set, arbitrarily selected from the total input data, to a corresponding operational device. In this case, the input data may correspond to the operational devices one-to-one.

In detail, the input unit 400 may receive the total input data and may individually output the input data of the sub-set, arbitrarily selected from the total input data, to the operational devices of the operational unit 100 according to the mini-batch gradient descent (MGD). That is, each operational device may be assigned input data different from those of the other operational devices.
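
For illustration, the one-to-one assignment might be modeled as below; random selection stands in for the arbitrary selection, and all names are hypothetical.

```python
import random

def assign_subset(total_inputs: list, num_devices: int) -> dict:
    """Arbitrarily select one input datum per operational device from the
    total input data, so inputs and devices correspond one-to-one."""
    subset = random.sample(total_inputs, num_devices)
    return {device_id: datum for device_id, datum in enumerate(subset)}
```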

Then, the operational unit 100 may include, for example, a plurality of operational devices 110_1 to 110_5.

The operational devices 110_1 to 110_5 according to the exemplary embodiment may have a systolic array structure and may sequentially perform the first and second operations. In this case, each of the operational devices 110_1 to 110_5 may be implemented with computing-in-memory.

In detail, the operational devices 110_1 to 110_5 may perform the first operation on the input data assigned through the input unit 400. That is, the operational devices 110_1 to 110_5 may simultaneously perform distributed processing of the first operation on the input data by using the forward propagation algorithm (FP).

For example, the forward propagation algorithm (FP) may correspond to the following Equation 1: A_in × W = A_out. In Equation 1, A_in denotes an activation input of the forward propagation algorithm, W denotes a weight, and A_out denotes an activation output.

As shown in FIG. 4, the operational devices 110_1 to 110_5 may output the loss state based on a plurality of confidence matrices obtained by performing the first operation.

In addition, as shown in FIG. 5, the operational devices 110_1 to 110_5 may perform the second operation on the input data after a predetermined time elapses from a time point at which the first operation is performed. That is, the operational devices 110_1 to 110_5 may simultaneously perform distributed processing of the second operation using the backward propagation algorithm (BP) and the weight update algorithm (WU).

For example, the backward propagation algorithm (BP) may correspond to the following Equation 2: L_in × W^T = L_out. In Equation 2, L_in denotes the loss input, W^T denotes the transposed weight, and L_out denotes the loss output.

In addition, the weight update algorithm (WU) may correspond to the following Equation 3: A_in^T × L_in = W_G. In Equation 3, A_in denotes the activation input of the forward propagation, A_in^T denotes its transpose, L_in denotes the loss input, and W_G denotes the variation in weight.
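
Restating Equations 1 to 3 as a runnable numpy sketch for a single fully-connected layer; the sizes are arbitrary, and the transpose in the weight-update step is inferred from the matrix dimensions, since the publication prints Equation 3 incompletely.

```python
import numpy as np

# Illustrative sizes for one fully-connected layer.
a_in = np.random.randn(1, 4)   # activation input A_in
W = np.random.randn(4, 3)      # weight W
l_in = np.random.randn(1, 3)   # loss input L_in from the following layer

a_out = a_in @ W               # Equation 1: A_in x W = A_out    (FP)
l_out = l_in @ W.T             # Equation 2: L_in x W^T = L_out  (BP)
w_g = a_in.T @ l_in            # Equation 3: A_in^T x L_in = W_G (WU);
                               # the transpose is inferred from the shapes

assert w_g.shape == W.shape    # the variation in weight matches W
```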

According to the exemplary embodiment, the operational unit 100 may initialize the operational devices corresponding to the skip data (e.g., 110_2 and 110_5) among the operational devices 110_1 to 110_5 in response to the parallelization control signal.

In this case, the operational unit 100 may reassign a portion of the training data assigned to the other operational devices 110_1, 110_3, and 110_4 to the initialized operational devices (e.g., 110_2 and 110_5).

Then, the operational unit 100 may process the second operation with respect to the training data in parallel through the operational devices 110_1 to 110_5 after the predetermined time elapses from the time point at which the first operation is performed, and thus, the operation speed may increase. In this case, the initialized operational devices (e.g., 110_2 and 110_5) may correspond one-to-one to the parallelized data, i.e., the reassigned portions of the training data.

According to the exemplary embodiment, the control unit 300 may reassign the sub-data divided from each of the training data to the operational devices 110_1 to 110_5 according to a data flow. In the present exemplary embodiment, the data flow may refer to a data movement path for reading and storing data.

That is, by reassigning the sub-data of each of the training data to the operational devices 110_1 to 110_5, the control unit 300 may perform distributed processing of the second operation on the training data.

Then, the output unit 500 may output the variation in output weight, which corresponds to a gradient of the sub-set, based on each variation in weight output through the operational unit 100. In this case, the variation in output weight may be a sum of the variations in weight.

FIG. 6 is a flowchart showing an operation performed by the deep neural network training accelerator 10 of FIG. 1.

Referring to FIGS. 1 and 6, the operational unit 100 may perform the first operation on the input data of the sub-set according to the mini-batch gradient descent (S110).

Then, the determination unit 200 may determine each of the input data as one of the skip data and the training data based on the confidence matrix obtained by performing the first operation (S120).

Next, the control unit 300 may output the parallelization control signal to the operational unit 100 in response to the skip data determined by the determination unit 200 (S130).

Then, the operational unit 100 may skip the second operation with respect to the skip data and may perform the second operation on the training data based on the parallelization control signal (S140).
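
Pulling S110 to S140 together, a minimal software analogue of the per-sub-set control flow might read as follows; `model.forward`, `model.backward`, and the 0.9 threshold are placeholders, and the hardware parallelization of S140 is abstracted into a plain loop.

```python
def train_subset(model, subset, threshold=0.9):
    """Software analogue of S110-S140: run the first operation on every
    input, skip the second operation for skip data, and sum the weight
    variations of the training data into the sub-set gradient."""
    weight_variations = []
    for x, y in subset:
        confidence, cache = model.forward(x)       # S110: first operation
        if confidence.max() >= threshold:          # S120: low signal
            continue                               # S130/S140: skip BP + WU
        weight_variations.append(model.backward(cache, y))  # S140
    return sum(weight_variations) if weight_variations else None
```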

FIG. 7 is a flowchart showing the second operation performed by the operational unit 100 of FIG. 5.

Referring to FIGS. 5 and 7, the operational unit 100 may initialize the operational devices corresponding to the skip data (e.g., 110_2 and 110_5) among the operational devices 110_1 to 110_5 based on the parallelization control signal (S210).

In this case, the operational unit 100 may reassign the portion of the training data assigned to the other operational devices 110_1, 110_3, and 110_4 to the initialized operational devices (e.g., 110_2 and 110_5) (S220).

Then, the operational unit 100 may process the second operation with respect to the training data in parallel by using the operational devices 110_1 to 110_5 after the predetermined time elapses from the time point at which the first operation is performed (S230).
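
As a sketch of S210 to S230, assuming the hypothetical `reassign` helper from the earlier sketch is in scope; `run_bp_wu(device, data)` is an assumed callable standing in for the BP and WU hardware pass, and threads stand in for the systolic array's parallelism.

```python
from concurrent.futures import ThreadPoolExecutor

def second_operation_parallel(devices, assignments, skip_ids, run_bp_wu):
    """Software analogue of S210-S230."""
    for d in skip_ids:
        assignments[d] = []                        # S210: initialize device
    assignments = reassign(assignments, skip_ids)  # S220: reassign sub-data
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        futures = [pool.submit(run_bp_wu, d, assignments[d]) for d in devices]
        return [f.result() for f in futures]       # S230: parallel BP + WU
```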

Although the exemplary embodiments of the present disclosure have been described, it is understood that the present disclosure should not be limited to these exemplary embodiments, and that various changes and modifications can be made by one of ordinary skill in the art within the spirit and scope of the present disclosure as hereinafter claimed. Therefore, the disclosed subject matter should not be limited to any single embodiment described herein, and the scope of the present inventive concept shall be determined according to the attached claims.

Claims

1. A deep neural network training accelerator comprising:

an operational unit sequentially performing first and second operations on a plurality of input data of a sub-set according to a mini-batch gradient descent;
a determination unit determining each of the input data as one of skip data and training data based on a confidence matrix obtained by the first operation; and
a control unit controlling the operational unit to skip the second operation with respect to the skip data.

2. The deep neural network training accelerator of claim 1, wherein the operational unit performs the second operation with respect to the training data after a predetermined time elapses from a time point at which the first operation is performed.

3. The deep neural network training accelerator of claim 1, wherein the first operation is a first training stage of the mini-batch gradient descent, which uses a forward propagation algorithm.

4. The deep neural network training accelerator of claim 1, wherein the second operation is a second training stage of the mini-batch gradient descent, which sequentially uses a backward propagation algorithm and a weight update algorithm.

5. The deep neural network training accelerator of claim 1, wherein the determination unit is implemented as a comparator that compares a largest element of the confidence matrix with a predetermined threshold value.

6. The deep neural network training accelerator of claim 5, wherein the comparator outputs a low signal corresponding to the skip data to the control unit when a value of the largest element is equal to or greater than the predetermined threshold value.

7. The deep neural network training accelerator of claim 5, wherein the comparator outputs a high signal corresponding to the training data to the control unit when a value of the largest element is smaller than the predetermined threshold value.

8. The deep neural network training accelerator of claim 6, wherein the control unit parallelizes the second operation with respect to the training data in response to the low signal.

9. The deep neural network training accelerator of claim 6, wherein a number of the low signals is inversely proportional to an operation time of the second operation.

10. The deep neural network training accelerator of claim 1, further comprising:

an input unit assigning each of the input data arbitrarily selected from total input data to the operational unit; and
an output unit summing each variation in weight output through the operational unit to output a variation in output weight corresponding to a gradient of the sub-set.

11. The deep neural network training accelerator of claim 1, wherein the operational unit has a systolic array structure and comprises a plurality of operational devices that sequentially performs the first and second operations.

12. The deep neural network training accelerator of claim 11, wherein the operational unit initializes any one operational device corresponding to the skip data among the operational devices in response to a parallelization control signal applied thereto from the control unit.

13. The deep neural network training accelerator of claim 12, wherein the operational unit reassigns a portion of the training data assigned to the other operational devices among the operational devices to the any one operational device.

14. The deep neural network training accelerator of claim 11, wherein the control unit reassigns a plurality of sub-data divided from each of the training data to the operational devices according to a data flow.

15. The deep neural network training accelerator of claim 14, wherein the data flow refers to a data movement path for reading and storing data.

16. A method of operating a deep neural network training accelerator, comprising:

allowing an operational unit to perform a first operation on a plurality of input data of a sub-set according to a mini-batch gradient descent;
allowing a determination unit to determine the input data as one of skip data and training data based on a confidence matrix obtained by the first operation;
allowing a control unit to output a parallelization control signal to skip a second operation with respect to the skip data in response to the skip data; and
allowing the operational unit to skip the second operation with respect to the skip data and to perform the second operation on the training data based on the parallelization control signal.

17. The method of claim 16, wherein the first operation is a first training stage of the mini-batch gradient descent, which uses a forward propagation algorithm.

18. The method of claim 16, wherein the second operation is a second training stage of the mini-batch gradient descent, which sequentially uses a backward propagation algorithm and a weight update algorithm.

19. The method of claim 16, wherein the determination unit is implemented as a comparator that compares a largest element of the confidence matrix with a predetermined threshold value.

20. The method of claim 16, wherein the performing of the second operation comprises:

allowing the operational unit to initialize any one operational device corresponding to the skip data among a plurality of operational devices in response to the parallelization control signal;
allowing the operational unit to reassign a portion of each of the training data assigned to the other operational devices to the any one operational device; and
allowing the operational unit to process the second operation with respect to the training data in parallel using the operational devices after a predetermined time elapses from a time point at which the first operation is performed.
Patent History
Publication number: 20220019897
Type: Application
Filed: Jan 19, 2021
Publication Date: Jan 20, 2022
Applicant: Korea University Research and Business Foundation (Seoul)
Inventors: Jong Sun PARK (Seoul), Dong Yeob SHIN (Seoul), Geon Ho KIM (Gunpo-si), Joong Ho JO (Seoul)
Application Number: 17/151,966
Classifications
International Classification: G06N 3/08 (20060101); G06N 3/063 (20060101); G06F 9/50 (20060101); G06K 9/62 (20060101);