CALCULATING DEVICE AND METHOD FOR A SPARSELY CONNECTED ARTIFICIAL NEURAL NETWORK

Aspects for modifying data in a multi-layer neural network (MNN) acceleration processor for neural networks are described herein. As an example, the aspects may include receiving one or more groups of input data, a predetermined weight value array, and connection data. Further, the aspects may include modifying the weight values included in the predetermined weight value array and the one or more groups of input data based on the connection data. Further still, the aspects may include calculating one or more groups of output data based on the modified weight values and the modified input data.

Description
BACKGROUND

Artificial Neural Networks (ANNs), or Neural Networks (NNs) for short, are algorithmic mathematical models that imitate the behavioral characteristics of animal neural networks and perform distributed concurrent information processing. Depending on the complexity of a system, such networks adjust the interconnections among a great number of internal nodes, thereby achieving the purpose of information processing. The algorithms used by NNs may include vector multiplication (also referred to as "multiplication") and convolution, which widely adopt sign functions and various approximations thereof.

Like neural networks in animal brains, NNs consist of multiple interconnected nodes. As shown in FIG. 3, each block represents a node and each arrow represents a connection between two nodes.

The calculation formula of a neuron can be briefly described as $y = f\left(\sum_{i=0}^{n} w_i x_i\right)$, wherein $x_i$ represents the input data received at the input nodes connected to the output node, $w_i$ represents the corresponding weight values between the input nodes and the output node, and $f$ is a nonlinear function, usually known as an activation function, including such commonly used functions as

$$\frac{1}{1+e^{-x}} \quad \text{and} \quad \frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}.$$
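
For illustration only, the neuron formula and the two activation functions above can be sketched in a few lines of Python; the function names are ours, not part of the disclosure:

```python
import math

def sigmoid(x: float) -> float:
    # f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x: float) -> float:
    # f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def neuron_output(weights, inputs, f=sigmoid):
    # y = f(sum_i w_i * x_i)
    return f(sum(w * x for w, x in zip(weights, inputs)))
```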

NNs are widely applied to a variety of applications, such as computer vision, voice recognition, and natural language processing. In recent years, the scale of NNs has kept growing. For example, in 1998, LeCun's neural network for handwritten character recognition included fewer than 1M weight values, while in 2012, Krizhevsky's network for the ImageNet competition included 60M weight values.

NNs are applications that require large amounts of calculation and great memory access bandwidth. The more weight values there are, the greater the amounts of calculation and memory access required. In order to decrease the amount of calculation and the number of weight values, and thereby reduce memory access, a sparsely connected neural network may be implemented.

Even as the amounts of calculation and memory access of NNs dramatically increase, a general-purpose processor is conventionally adopted to calculate a sparse artificial neural network. With a general-purpose processor, the input neurons, output neurons, and weight values are stored in three separate arrays, along with an index array that stores the connection relation between each output neuron and each input neuron connected by a weight value. At the time of calculating, the major operation is a multiplication of input data by a weight value. Each calculation needs to search for the weight value corresponding to the input data through the index array. Since a general-purpose processor is weak in both calculation and memory access, the demands of NNs may not be satisfied. Moreover, when multiple general-purpose processors work concurrently, inter-processor communication becomes a performance bottleneck. In addition, when calculating a neural network after pruning, each multiplication operation needs to re-search the positions corresponding to the weight values in the index array, which adds extra calculation and memory access overhead. Thus, NN calculation is time-consuming and power-consuming. General-purpose processors also need to decode an operation of a multiple-layer artificial neural network into a long sequence of operation and memory access instructions, and the front-end decoding brings about a large overhead.

Another known method to support the operations and training algorithms of a sparsely connected artificial neural network is to use a graphics processing unit (GPU). In such a method, a general-purpose register file and a general-purpose stream processing unit are used to execute universal single-instruction-multiple-data (SIMD) instructions to support the aforementioned algorithms. Since a GPU is a device specially designed for executing graph and image operations as well as scientific calculation, it fails to provide specific support for sparse artificial neural network operations. As such, GPUs also need a great amount of front-end decoding to execute sparse artificial neural network operations, thus incurring additional overhead. In addition, since a GPU only contains relatively small on-chip caches, the model data (e.g., weight values) of a multiple-layer artificial neural network have to be repeatedly retrieved from outside the chip. Thus, off-chip bandwidth becomes a main performance bottleneck while producing huge power consumption.

SUMMARY

The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.

The present disclosure presents examples of techniques for modifying data in an MNN acceleration processor for neural networks. An example apparatus may include a data modifier configured to receive one or more groups of input data. The one or more groups of input data may be stored as input elements in an input array and each of the input elements may be identified by an input array index. The data modifier may be further configured to receive a predetermined weight value array that includes one or more weight values for calculating one or more groups of output data based on the one or more groups of input data. The one or more groups of output data may be stored as output elements in an output array and each of the output elements may be identified by an output array index. Further still, the data modifier may be configured to receive connection data that include one or more connection values. Each of the connection values may correspond to one of the input array indexes and one of the output array indexes and may indicate whether one of the weight values in the predetermined weight value array is designated for calculating a group of the output data to be stored as the output element identified by the corresponding output array index based on a group of the input data stored as the input element identified by the corresponding input array index, and whether the weight value meets a predetermined condition. The data modifier may be further configured to modify the weight values and the input data based on the connection data. In addition, the example apparatus may include a computing unit configured to receive the modified weight values and the modified input data from the data modifier and calculate the one or more groups of output data based on the modified weight values and the modified input data.

An example method for modifying data in an MNN acceleration processor for neural networks may include receiving one or more groups of input data. The one or more groups of input data may be stored as input elements in an input array and each of the input elements may be identified by an input array index. Further, the example method may include receiving a predetermined weight value array that includes one or more weight values for calculating one or more groups of output data based on the one or more groups of input data. The one or more groups of output data may be stored as output elements in an output array and each of the output elements may be identified by an output array index. Further still, the example method may include receiving connection data that include one or more connection values. Each of the connection values may correspond to one of the input array indexes and one of the output array indexes and indicate whether one of the weight values in the predetermined weight value array is designated for calculating a group of the output data to be stored as the output element identified by the corresponding output array index based on a group of the input data stored as the input element identified by the corresponding input array index, and whether the weight value meets a predetermined condition. In addition, the example method may include modifying the weight values and the input data based on the connection data and calculating the one or more groups of output data based on the modified weight values and the modified input data.

To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed aspects will hereinafter be described in conjunction with the appended drawings, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements, and in which:

FIG. 1 is a block diagram illustrating an example computing process at an MNN acceleration processor for neural networks;

FIG. 2 is a block diagram illustrating an example computer system in which data modification for neural networks may be implemented;

FIG. 3 is a diagram illustrating a comparison between a regular MNN and a sparse MNN in which data modification for neural networks may be implemented;

FIG. 4A and FIG. 4B are diagrams illustrating one or more connection values in a sparse MNN in which data modification for neural networks may be implemented;

FIG. 5 is a diagram illustrating a convolution process with which data modification for neural networks may be implemented;

FIG. 6 is a diagram illustrating a convolution process with modified weight values with which data modification for neural networks may be implemented;

FIG. 7 is a block diagram illustrating an example MNN acceleration processor in which data modification for neural networks may be implemented;

FIG. 8 is a block diagram illustrating another example MNN acceleration processor in which data modification for neural networks may be implemented;

FIG. 9 is a block diagram illustrating an example data modifier by which data modification for neural networks may be implemented;

FIG. 10 is a flow chart of aspects of an example method for modifying data for neural networks;

FIG. 11 is a block diagram illustrating another example MNN acceleration processor in which data modification for neural networks may be implemented;

FIG. 12 is a block diagram illustrating another example data modifier by which data modification for neural networks may be implemented;

FIG. 13 is a flow chart of aspects of another example method for modifying data for neural networks;

FIG. 14 is a block diagram illustrating another example MNN acceleration processor in which data modification for neural networks may be implemented;

FIG. 15 is a block diagram illustrating another example data modifier by which data modification for neural networks may be implemented; and

FIG. 16 is a flow chart of aspects of another example method for modifying data for neural networks.

DETAILED DESCRIPTION

Various aspects are now described with reference to the drawings. In the following description, for purpose of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspect(s) may be practiced without these specific details.

A typical conceptual model of a multi-layer neural network (MNN) may include multiple layers of neurons. Each neuron is an information-processing unit that is fundamental to the operation of a neural network. In more detail, a typical model of a neuron may include three basic elements, e.g., a set of synapses, an adder, and an activation function. In the form of a mathematical formula, the output signals of a neuron may be represented as $y_k = \varphi\left(\sum_{j=1}^{m} w_{kj} x_j + b_k\right)$, in which $y_k$ represents the output signals of the neuron, $\varphi(\cdot)$ represents the activation function, $w_{kj}$ represents one or more weight values, $x_j$ represents the input signals of the neuron, and $b_k$ represents a bias value. In other words, a simplified model of a neuron may include one or more input nodes for receiving the input signals or data and an output node for transmitting the output signals or data to an input node of another neuron at the next level. Thus, a layer of neurons may at least include a layer of multiple input nodes and another layer of output nodes.
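
As a hedged illustration (not the patented hardware), the neuron model above maps directly onto a few lines of Python; the names phi, W, x, and b are ours:

```python
def layer_forward(phi, W, x, b):
    """One layer of neurons: y_k = phi(sum_j W[k][j] * x[j] + b[k])."""
    return [phi(sum(w_kj * x_j for w_kj, x_j in zip(row, x)) + b_k)
            for row, b_k in zip(W, b)]
```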

FIG. 1 is a block diagram illustrating an example computing process 100 at an MNN acceleration processor for neural networks. As depicted, the example computing process 100 may be performed by a layer of input nodes 102, a layer of output nodes 104, a layer of input nodes 106, and a layer of output nodes 108. A triangular-shaped operator (Δ as shown in FIG. 1) may indicate a matrix multiplication or a convolution operation. It is notable that the layers of input nodes and output nodes may not be the first layer and the last layer of the entire neural network in the process. Rather, the layers of input and output nodes may refer to the nodes included in any two consecutive layers of neurons of a neural network. As described below in greater detail, the computing process from the layer of input nodes 102 to the layer of output nodes 108 may be referred to as a forward propagation process; the computing process from the layer of output nodes 108 to the layer of input nodes 102 may be referred to as a backward propagation process.

The forward propagation process may start from one or more input nodes that receive input data 102A. The received input data 102A may be multiplied or convolved by one or more weight values 102C. The results of the multiplication or convolution may be transmitted to one or more output nodes at the layer of output nodes 104 as output data 104A. The output data 104A, with or without further operations, may be transmitted to one or more input nodes at the next layer (e.g., the layer of input nodes 106) as input data 106A. Similarly, the input data 106A may be multiplied or convolved by one or more weight values 106C. The results of the multiplication or convolution may be similarly transmitted to one or more output nodes at the layer of output nodes 108 as output data 108A.

The backward propagation process may start from one or more output nodes at the last layer of nodes of the forward propagation process (e.g., the layer of output nodes 108). For example, output gradients 108B generated at the layer of output nodes 108 may be multiplied or convolved by the input data 106A to generate weight gradients 106D at the layer of input nodes 106. The output gradients 108B may be further multiplied or convolved by the weight values 106C to generate the input data gradients 106B. The input data gradients 106B, with or without other operations between layers, may be transmitted to one or more nodes at the layer of output nodes 104 as output gradients 104B. The output gradients 104B may then be multiplied or convolved by the input data 102A to generate weight gradients 102D. Additionally, the output gradients 104B may be multiplied by the weight values 102C to generate input data gradients 102B.
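
For the matrix-multiplication case, and with the inter-layer operations omitted, this propagation arithmetic can be sketched as follows. This is a minimal NumPy sketch; the variable names mirror the reference numerals but are otherwise ours:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input data 102A
W1 = rng.standard_normal((3, 4))  # weight values 102C
W2 = rng.standard_normal((2, 3))  # weight values 106C

# Forward propagation
h = W1 @ x                        # output data 104A, reused as input data 106A
y = W2 @ h                        # output data 108A

# Backward propagation, starting from output gradients 108B
g_y = rng.standard_normal(2)      # output gradients 108B
gW2 = np.outer(g_y, h)            # weight gradients 106D (gradients x input data 106A)
g_h = W2.T @ g_y                  # input data gradients 106B (gradients x weight values 106C)
gW1 = np.outer(g_h, x)            # weight gradients 102D
g_x = W1.T @ g_h                  # input data gradients 102B
```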

FIG. 2 is a block diagram illustrating an example computer system 200 in which data modification for neural networks may be implemented. The example computer system 200 may include at least an I/O interface 202, a central processing unit (CPU) 204, a multi-layer neural network acceleration processor 206, and a memory 208. The I/O interface 202 may be configured to exchange data or information with peripheral devices, e.g., input devices, storage devices, etc. Data received from the I/O interface 202 may be further processed at the CPU 204. Data that require processing at an MNN may be transmitted to the MNN acceleration processor 206. For example, the forward propagation process and the backward propagation process described above in accordance with FIG. 1 may be performed at the MNN acceleration processor 206. Other data for the forward propagation process and the backward propagation process, e.g., weight values 102C and 106C, may be retrieved from the memory 208 and stored on the MNN acceleration processor 206 during the processes. However, as discussed above, the index array that indicates the correspondence between the input data and the weight values is conventionally stored on the memory 208. At each multiplication or convolution that involves the weight values, retrieving the index array from the memory 208 may cause significant system delays or bandwidth consumption. The MNN acceleration processor 206 is described in further detail below.

FIG. 3 is a diagram illustrating a comparison between a regular MNN 300A and a sparse MNN 300B in which data modification for neural networks may be implemented. As depicted, the regular MNN 300A may include a layer of input nodes 302 and a layer of output nodes 304. Each block shown in the regular MNN 300A indicates an input node or an output node. The arrows between the input nodes (e.g., i1, i2, i3 . . . iN) and the output nodes (e.g., o1, o2, o3 . . . oN) indicate the non-zero weight values for calculating the output data. For example, w11 may be the weight value for calculating the output data at output node o1 based on the input data received at input node i1. However, in some applications of neural networks, more than one of the weight values may be zero, in which case the input data received at more than one input node are not considered for calculating some of the output data. In these cases, the arrows between the corresponding input nodes and output nodes are deleted and the MNN may be referred to as a sparse MNN, e.g., sparse MNN 300B. As shown in sparse MNN 300B, there is no arrow between i2 and o1, between i1 and o2, or between i4 and o2, which indicates that the weight values w21, w12, and w42 are zero.

FIG. 4A and FIG. 4B are diagrams illustrating one or more connection values in a sparse MNN in which data modification for neural networks may be implemented. As discussed above, an index array that indicates the correspondence between the weight values and the input data is conventionally stored in the memory 208. With respect to sparse MNNs, connection data that indicate the correspondence between the output data and the input data may be generated and transmitted to MNN acceleration processor 206.

As depicted in FIGS. 4A and 4B, one or more groups of input data may be received at the input nodes i1, i2, i3, and i4. In other words, the input data may be received and stored in the form of an input array that includes elements identified by array indexes i1, i2, i3, and i4. Similarly, one or more groups of output data may be generated at output nodes o1 and o2. That is, the output data may be stored and transmitted in the form of an output array that includes elements identified by array indexes o1 and o2. As an example of a sparse MNN, some input nodes are not connected to the output nodes.

Connection data including one or more connection values may be generated based on the weight values corresponding to an output node and an input node. That is, if a weight value meets a predetermined condition, a connection value for the corresponding output node and input node may be set to one. Otherwise, if a weight value corresponding to the output node and input node is zero, or the weight value does not meet the predetermined condition, the connection value for the corresponding output node and input node may be set to zero. In some examples, the predetermined condition may include that, the weight value is a non-zero number, an absolute value of the weight value is less than or equal to a first threshold value, and/or the absolute value of the weight value is less than or equal to a second threshold value but greater than or equal to a third threshold value. The first, second, and third threshold values may be received from the peripheral devices via the I/O interface 202.

For example, the weight values for calculating the output data at output node o1 may include w11, w21, w31, and w41, which respectively correspond to the input data received at input nodes i1, i2, i3, and i4. The weight values (w11, w21, w31, and w41) may be 0.5, 0, 0.6, and 0.8, and the predetermined condition may be that a weight value is greater than zero but less than 0.99. Thus, weight values w11, w31, and w41 meet the predetermined condition but w21 does not. As such, the connection values for i1 and o1, i3 and o1, and i4 and o1 may be set to one, and the connection value for i2 and o1 may be set to zero. Similarly, the connection values for i1 and o2 and for i4 and o2 may be set to zero, and the connection values for i2 and o2 and for i3 and o2 may be set to one. Thus, the connection values for o1 may be determined and stored as (1, 0, 1, 1) and the connection values for o2 may be determined as (0, 1, 1, 0). In some examples, the connection values may be stored in the form of a linked list or a multi-dimensional dynamic array.
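
As a sketch, generating these binary connection values reduces to an elementwise test; the helper below and its names are illustrative, with the predetermined condition passed in as a function:

```python
def connection_values(weights, meets_condition):
    # 1 if the weight value meets the predetermined condition, 0 otherwise.
    return [1 if meets_condition(w) else 0 for w in weights]

# The example above: weight values for o1 under the condition 0 < w < 0.99.
conn_o1 = connection_values([0.5, 0, 0.6, 0.8], lambda w: 0 < w < 0.99)
assert conn_o1 == [1, 0, 1, 1]
```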

In other examples (e.g., as illustrated in FIG. 4B), connection values may be generated based on the distance between input nodes. A connection value may be determined by the distances between the input nodes that correspond to those weight values that meet the predetermined condition. With respect to the above example weight values, w11, w31, and w41 meet the predetermined condition. The connection value for input node i1 may be set to a value equal to the distance between the first input node and the current input node. Thus, since the distance between input node i1 and the first input node (also i1 here) is zero, the connection value for i1 may be set to zero. With respect to input node i3, since the distance between input node i3 and the first input node (i1) is 2, the connection value for i3 may be set to 2. It is notable that the illustration and the term "distance" are provided for purposes of brevity. Since the input data and the output data may be stored in the form of data arrays, the term "distance" may refer to the difference between array indexes.
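
One plausible reading of this distance encoding, sketched below, stores the array-index offset of the first connected node and then the index difference between each connected node and the previous one. This reproduces the values 0 (for i1) and 2 (for i3) stated above; the treatment of later nodes is our assumption, since the text only states the first two values:

```python
def distance_encoding(binary_connections):
    # Positions (array indexes) of the connected input nodes.
    positions = [i for i, c in enumerate(binary_connections) if c == 1]
    # First value: offset of the first connection from the first input node;
    # later values: index difference from the previous connected node
    # (an assumption, as noted in the lead-in).
    return [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]

assert distance_encoding([1, 0, 1, 1]) == [0, 2, 1]
```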

Thus, as the connection values sufficiently represent the connections between the input nodes and the output nodes, the MNN acceleration processor 206 is not required to retrieve the index array from the memory 208 during the forward propagation process and the backward propagation process described in FIG. 1.

FIG. 5 is a diagram illustrating a convolution process with which data modification for neural networks may be implemented. In this example, an example convolution process between one or more groups of input data in the form of an input matrix

$$\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$

and weight values in the form of a weight matrix

$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$$

is described. As shown, each element of the output matrix is calculated by convolving a portion of the input matrix with the weight matrix. For example, the output data at the output node o1 may be calculated by convolving the top left portion of the input matrix (i.e., $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$) with the weight matrix. The result of the convolution process may be stored in an output matrix (e.g., $\begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix}$ as shown).
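
A direct software sketch of this sliding-window operation (stride 1, no padding, correlation form; the names are illustrative) reproduces the output matrix above:

```python
import numpy as np

def conv2d(inp, kernel):
    kh, kw = kernel.shape
    oh, ow = inp.shape[0] - kh + 1, inp.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            # Multiply each window of the input matrix by the weight matrix.
            out[r, c] = np.sum(inp[r:r + kh, c:c + kw] * kernel)
    return out

inp = np.array([[1, 0, 1],
                [0, 1, 1],
                [0, 0, 1]])
w = np.array([[1, 1],
              [1, 0]])
assert (conv2d(inp, w) == np.array([[1, 2], [1, 2]])).all()
```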

FIG. 6 is a diagram illustrating a convolution process with a sparse weight matrix with which data modification for neural networks may be implemented. As depicted, the top part of FIG. 6 shows a convolution process between an input matrix and a weight matrix. The lower part of FIG. 6 shows a convolution process between the input matrix and a sparse weight matrix, in which the weight values w2 and w3 are deleted. Thus, rather than four multiplication operations per convolution window, only two are required to generate the output matrix. Specifically, the connection values for w1, w2, w3, and w4 may be set to (1, 0, 0, 1), or to (0, 2) in the distance form, for the calculation of the output data at the output nodes.
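
A hedged sketch of the pruned case follows: only the retained weight values and their positions within the kernel are kept, so each output element costs two multiplications instead of four. The offset-list representation is our choice for illustration, not the patent's storage format:

```python
import numpy as np

def sparse_conv2d(inp, values, offsets, kh, kw):
    # values: retained weight values; offsets: their (row, col) positions
    # within the kh x kw kernel.
    oh, ow = inp.shape[0] - kh + 1, inp.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = sum(v * inp[r + dr, c + dc]
                            for v, (dr, dc) in zip(values, offsets))
    return out

# Keeping only w1 at (0, 0) and w4 at (1, 1) of a 2x2 kernel:
# sparse_conv2d(inp, [w1, w4], [(0, 0), (1, 1)], 2, 2)
```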

FIG. 7 is a block diagram illustrating an example MNN acceleration processor 206 in which data modification for neural networks may be implemented. As depicted, the MNN acceleration processor 206 may at least include a data modifier 702 configured to receive one or more groups of input data and a predetermined weight value array that includes one or more weight values. As described above, the one or more groups of input data may be stored in the form of a data array ("input array" hereinafter); that is, each group of the input data may be stored as an element of the input array ("input element" hereinafter). Each input element may be identified by an array index ("input array index" hereinafter; e.g., i1, i2, i3, and i4). Each of the weight values may be designated for calculating a group of output data at an output node (e.g., o1) based on a respective group of input data (e.g., a group of input data received at the input node i1). The calculated output data may be similarly stored in the form of a data array ("output array" hereinafter); that is, each group of the output data may be stored as an element of the output array ("output element" hereinafter). Each output element may be identified by an array index ("output array index" hereinafter; e.g., o1 and o2).

The data modifier 702 may be configured to further receive connection data that include the one or more aforementioned connection values. Each of the connection values may correspond to an input array index (e.g., i2) and an output array index (e.g., o1).

Further, the data modifier 702 may be configured to modify the input data and the weight values based on the connection values. In some aspects, the data modifier 702 may be configured to operate in a work mode in which it deletes one or more weight values or one or more groups of the input data ("pruning mode"). Additionally, the data modifier 702 may be configured to operate in another work mode in which it adds one or more zero values to the predetermined weight value array or the input data ("compensation mode"). The selection between the pruning mode and the compensation mode may be predetermined as a system parameter or according to other algorithms prior to the receiving of the input data.

In a specific example, the data modifier 702 may receive an input array including groups of input data (0.5, 0.6, 0.7, 1.2, 4, 0.1), an array of connection values (1, 0, 0, 1, 1, 1), and a predetermined weight value array including weight values (0.5, 0.8, 0.9, 0.4). Conventionally, when a processor performs multiplication or convolution operations on the six-element input array and the four-element weight array, the processor retrieves the index array from the memory 208 to determine which four elements of the input array should be multiplied or convolved by the four elements in the weight array. Retrieving the index array, as previously discussed, likely causes significant bandwidth consumption.

In this example, the data modifier 702 may be configured to operate in the pruning mode. Since the second and the third connection values are zeroes, the data modifier 702 may be configured to delete the corresponding groups of the input data, i.e., the second and the third groups of the input data (0.6 and 0.7). The modified input data may be stored as an array including elements (0.5, 1.2, 4, 0.1). The modified input data may then be transmitted to a direct memory access (DMA) module 704. Alternatively, the modified input data may be transmitted to and stored at the memory 208 for future processing.

In another specific example where the data modifier 702 operates in the pruning mode, the data modifier 702 may receive groups of input data in an input array (0.5, 1.2, 4, 0.1), a predetermined weight value array including weight values (0.5, 0, 0, 0.8, 0.9, 0.4), and the same array of connection values. Since the second and the third connection values are zeroes, the data modifier 702 may be configured to delete the corresponding weight values, i.e., the second and the third weight values, from the predetermined weight value array. The modified weight value array may be stored as an array including elements (0.5, 0.8, 0.9, 0.4). Similarly, the modified weight value array may be transmitted to the DMA module 704 or to the memory 208.
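
Both pruning examples reduce to the same elementwise filter, sketched below as an illustrative helper rather than the hardware implementation:

```python
def prune(values, connections):
    # Keep only the elements whose connection value is 1; applies equally
    # to the input array and to the predetermined weight value array.
    return [v for v, c in zip(values, connections) if c == 1]

conn = [1, 0, 0, 1, 1, 1]
assert prune([0.5, 0.6, 0.7, 1.2, 4, 0.1], conn) == [0.5, 1.2, 4, 0.1]
assert prune([0.5, 0, 0, 0.8, 0.9, 0.4], conn) == [0.5, 0.8, 0.9, 0.4]
```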

In some other examples, the data modifier 702 may be configured to operate in the compensation mode. For example, the data modifier 702 may receive an input array including elements (0.5, 1.2, 4, 0.1), a predetermined weight value array including weight values (0.5, 0, 0, 0.8, 0.9, 0.4), and the same connection data including connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 702 may be configured to add two zero-valued elements to the input array as its second and third elements, generating a modified input array including elements (0.5, 0, 0, 1.2, 4, 0.1). For the same reason stated above, a processor that performs multiplication or convolution operations on the modified input array and the predetermined weight value array is not required to retrieve the index array from the memory 208 and, thus, bandwidth consumption may be reduced.

In another example where the data modifier 702 operates in the compensation mode, the data modifier 702 may receive an input array including elements (0.5, 0.6, 0.7, 1.2, 4, 0.1), a predetermined weight value array including elements (0.5, 0.8, 0.9, 0.4), and the same connection data including connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 702 may be configured to add two zero-valued elements as the second and the third elements of the predetermined weight value array, generating a modified weight value array including elements (0.5, 0, 0, 0.8, 0.9, 0.4).
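
The compensation mode is the inverse operation: a zero is inserted wherever the connection value is zero, so that the two operand arrays line up index by index. Again, the helper below is an illustrative sketch:

```python
def compensate(values, connections):
    # Insert a zero at every position whose connection value is 0.
    it = iter(values)
    return [next(it) if c == 1 else 0.0 for c in connections]

conn = [1, 0, 0, 1, 1, 1]
assert compensate([0.5, 1.2, 4, 0.1], conn) == [0.5, 0.0, 0.0, 1.2, 4, 0.1]
assert compensate([0.5, 0.8, 0.9, 0.4], conn) == [0.5, 0.0, 0.0, 0.8, 0.9, 0.4]
```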

The modified input data and/or the modified weight values may be transmitted to and temporarily stored in an input data cache 712 and/or a weight cache 714. The input data cache 712 and weight cache 714 may refer to one or more high-speed storage devices incorporated within the MNN acceleration processor 206 and configured to store the input data and the weight values respectively. The modified input data and/or the modified weight values may be further transmitted to a computing unit 710 for further processing.

MNN acceleration processor 206 may further include an instruction cache 706 and a controller unit 708. The instruction cache 706 may refer to one or more storage devices configured to store instructions received from the CPU 204. The controller unit 708 may be configured to read the instructions from the instruction cache 706 and decode them.

Upon receiving the decoded instructions from the controller unit 708, the modified input data from the input data cache 712, and the modified weight values from the weight cache 714, the computing unit 710 may be configured to calculate one or more groups of output data based on the modified weight values and the modified input data. In some aspects, the calculation of the output data may include the forward propagation process and the backward propagation process described in accordance with FIG. 1.

The computing unit 710 may further include one or more multipliers configured to multiply the modified input data by the modified weight values to generate one or more weighted input data, one or more adders configured to add the one or more weighted input data to generate a total weighted value and add a bias value to the total weighted value to generate a biased value, and an activation processor configured to perform an activation function on the biased value to generate a group of output data.
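
In software terms, this multiplier/adder/activation pipeline behaves like the following sketch, which is a behavioral model only and not the hardware:

```python
def computing_unit(inputs, weights, bias, activation):
    # Multipliers: elementwise products of the (modified) input data and
    # the (modified) weight values.
    weighted = [x * w for x, w in zip(inputs, weights)]
    # Adders: total weighted value, then add the bias value.
    biased = sum(weighted) + bias
    # Activation processor: apply the activation function.
    return activation(biased)
```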

The generated output data may be temporarily stored in an output data cache 716 and may be further transmitted to the memory 208 via the DMA module 704.

FIG. 8 is a block diagram illustrating another example MNN acceleration processor 206 in which data modification for neural networks may be implemented. As depicted, components in the example MNN acceleration processor 206 may be the same or similar to the corresponding components shown in FIG. 7 or may be configured to perform the same or similar operations to those shown in FIG. 7 except that a data modifier 802 may be implemented between a DMA module 804, an input data cache 812, and a weight cache 814.

The data modifier 802, similar to the data modifier 702, may be configured to modify the input data and the weight values based on the connection values. The modified input data and the modified weight values may be transmitted to an input data cache 812 and a weight cache 814 and may be further transmitted to a computing unit 810 for further processing.

FIG. 9 is a block diagram illustrating an example data modifier 702/802 by which data modification for neural networks may be implemented. As depicted, the data modifier 702/802 may include an input data modifier 902 and a weight modifier 904.

Depending on the operation mode, the input data modifier 902 may be configured to modify the input data. When operating in the pruning mode, the input data modifier 902 may be configured to delete the groups of input data that correspond to connection values of zero. When operating in the compensation mode, the input data modifier 902 may be configured to add one or more zeroes as the elements corresponding to connection values of zero.

Similarly, the weight modifier 904 may be configured to modify the weight values based on the operation mode. When operating in the pruning mode, the weight modifier 904 may be configured to delete the weight values that correspond to connection values of zero. When operating in the compensation mode, the weight modifier 904 may be configured to add one or more zeroes as the elements corresponding to connection values of zero.

In some aspects, the input data modifier 902 and the weight modifier 904 may be implemented by one or more multiplexers and at least one storage device configured to store information indicating the current operation mode.

In a non-limiting example illustrated in FIG. 9, the input data modifier 902 may include an input data filter 906 and an input data multiplexer 908. The input data filter 906 may be configured to output an input element if a connection value corresponding to the input element is 1. Further, when the connection value is 0, the input data filter 906 may be configured to ignore the corresponding input element and move to process the next input element. The input data multiplexer 908 may be configured to output data from the input data filter 906 when in the pruning mode and to directly output the input data when in the compensation mode. As such, those input elements corresponding to the connection values of zero may be deleted when the input data modifier 902 is configured to work in the pruning mode.

Further to the above non-limiting example, the weight modifier 904 may include a first level weight multiplexer 910 and a second level weight multiplexer 912. The first level weight multiplexer 910 may be configured to output a zero value if a corresponding connection value is 0 and to output a weight value corresponding to the connection value if the connection value is 1. The second level weight multiplexer 912 may be configured to output data received from the first level weight multiplexer 910 when in the compensation mode. Further, the second level weight multiplexer 912 may be configured to directly output a corresponding weight value when in the pruning mode. As such, additional elements of zero values may be added to the weight value array when the weight modifier 904 is configured to work in the compensation mode.
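
Read together, the filter and the two multiplexer levels implement the behavior below. This is a behavioral Python model of FIG. 9 under our reading of the datapath, not RTL:

```python
PRUNING, COMPENSATION = "pruning", "compensation"

def input_data_path(inputs, connections, mode):
    # Input data filter: drop elements whose connection value is 0.
    filtered = [x for x, c in zip(inputs, connections) if c == 1]
    # Input data multiplexer: filtered data in pruning mode,
    # pass-through in compensation mode.
    return filtered if mode == PRUNING else list(inputs)

def weight_path(weights, connections, mode):
    # First level weight multiplexer: a zero for connection value 0,
    # the next stored weight value for connection value 1.
    it = iter(weights)
    zero_filled = [next(it) if c == 1 else 0.0 for c in connections]
    # Second level weight multiplexer: zero-filled stream in compensation
    # mode, pass-through in pruning mode.
    return zero_filled if mode == COMPENSATION else list(weights)
```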

FIG. 10 is a flow chart of aspects of an example method 1000 for modifying data for neural networks. The example method 1000 may be performed by one or more components of the MNN acceleration processor 206 as described in FIGS. 7 and 8 and the components of the data modifier 702/802 as described in FIG. 9.

At block 1002, method 1000 may include the data modifier 702/802 receiving one or more groups of input data, wherein the one or more groups of input data are stored as input elements in an input array and each of the input elements is identified by an input array index.

Further, method 1000 may include the data modifier 702/802 receiving a predetermined weight value array that includes one or more weight values for calculating one or more groups of output data based on the one or more groups of input data, wherein the one or more groups of output data are to be stored as output elements in an output array and each of the output elements is identified by an output array index.

Further still, method 1000 may include the data modifier 702/802 receiving connection data that include one or more connection values, wherein each of the connection values corresponds to one of the input array indexes and one of the output array indexes and indicates whether one of the weight values in the predetermined weight value array is designated for calculating a group of the output data to be stored as the output element identified by the corresponding output array index based on a group of the input data stored as the input element identified by the corresponding input array index, and whether the weight value meets a predetermined condition.

At block 1004, method 1000 may include the data modifier 702/802 modifying the weight values and the input data based on the connection data. In some aspects, the modifying may further include sub-processes or sub-operations including deleting at least one weight value that corresponds to a connection value of zero, adding one or more zero values to the predetermined weight value array based on the connection values, deleting at least one group of the input data stored as the input elements identified by the input array indexes corresponding to connection values of zero, or adding one or more zero values to the input elements identified by the input array indexes corresponding to connection values of zero.

In a specific example, the data modifier 702 may receive an input array including groups of input data (0.5, 0.6, 0.7, 1.2, 4, 0.1), an array of connection values including elements (1, 0, 0, 1, 1, 1), and a predetermined weight value array including weight values (0.5, 0.8, 0.9, 0.4). In this example, the data modifier 702 may be configured to operate in the pruning mode. Since the second and the third connection values are zeroes, the data modifier 702 may be configured to delete the corresponding groups of the input data, i.e., the second and the third groups of the input data (0.6 and 0.7). The modified input data may be stored as an array including elements (0.5, 1.2, 4, 0.1).

In another specific example where the data modifier 702 operates in the pruning mode, the data modifier 702 may receive groups of input data in an input array (0.5, 1.2, 4, 0.1), a predetermined weight value array including weight values (0.5, 0, 0, 0.8, 0.9, 0.4), and the same array of connection values. Since the second and the third connection values are zeroes, the data modifier 702 may be configured to delete the corresponding weight values, i.e., the second and the third weight values, from the predetermined weight value array. The modified weight value array may be stored as an array including elements (0.5, 0.8, 0.9, 0.4).

At block 1006, method 1000 may include the computing unit 710/810 calculating the one or more groups of output data based on the modified weight values and the modified input data. That is, the computing unit 710 may be configured to calculate one or more groups of output data based on the modified weight values and the modified input data. In some aspects, the computing unit 710 may further include one or more multipliers configured to multiply the modified input data by the modified weight values to generate one or more weighted input data, one or more adders configured to add the one or more weighted input data to generate a total weighted value and add a bias value to the total weighted value to generate a biased value, and an activation processor configured to perform an activation function on the biased value to generate a group of output data.

FIG. 11 is a block diagram illustrating another example MNN acceleration processor 206 in which data modification for neural networks may be implemented. As depicted, components in the example MNN acceleration processor 206 may be the same or similar to the corresponding components shown in FIG. 7 or may be configured to perform the same or similar operations to those shown in FIG. 7, except that a data modifier 1102 may be implemented between a DMA module 1104 and an input data cache 1112. For example, the DMA module 1104 may be configured to transmit and receive data from and to the memory 208, an instruction cache 1106, the data modifier 1102, a weight cache 1114, and an output data cache 1116. The instruction cache 1106, the input data cache 1112, the weight cache 1114, and the output data cache 1116 may respectively refer to one or more high-speed storage devices incorporated within the MNN acceleration processor 206 and configured to respectively store instructions from the DMA module 1104, the modified input data from the data modifier 1102, weight values from the DMA module 1104, and the calculated output data from a computing unit 1110.

In this example, the data modifier 1102 may be configured to receive one or more groups of input data for generating one or more groups of output data. The one or more groups of input data may be stored as input elements in an input array and each of the input elements is identified by an input array index. The data modifier 1102 may be further configured to receive connection data that include one or more connection values. In this example, unlike the data modifier 702/802, the data modifier 1102 is not configured to receive the weight values as the weight values are directly transmitted from the DMA module 1104 to the weight cache 1114.

Upon receiving the input data and the connection data, the data modifier 1102 may be configured to modify the received groups of input data based on the connection data. For example, the data modifier 1102 may be configured to receive an input array including groups of input data as elements (0.5, 0.6, 0.7, 1.2, 4, 0.1) and an array of connection values (1, 0, 0, 1, 1, 1). When the data modifier 1102 operates in the pruning mode, the data modifier 1102 may be configured to delete the corresponding groups of the input data, i.e., the second and the third groups of the input data (0.6 and 0.7). The modified input data may be stored as an array including elements (0.5, 1.2, 4, 0.1).

In some other aspects, the data modifier 1102 may operate in the compensation mode. For example, the data modifier 1102 may receive an input array including elements (0.5, 1.2, 4, 0.1) and the same connection data including connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 1102 may be configured to add two zero-valued elements to the input array as its second and third elements, generating a modified input array including elements (0.5, 0, 0, 1.2, 4, 0.1).

In this example, the modified input data may be transmitted to and temporarily stored at the input data cache 1112. The modified input data may be further transmitted, together with the weight values from the weight cache 1114 and the decoded instructions from the controller unit 1108, to the computing unit 1110. The computing unit 1110 may be configured to calculate one or more groups of output data based on the weight values and the modified input data. In some aspects, the calculation of the output data may include the forward propagation process and the backward propagation process described in accordance with FIG. 1.

Similar to the computing unit 710, the computing unit 1110 may include one or more multipliers configured to multiply the modified input data by the weight values to generate one or more weighted input data, one or more adders configured to add the one or more weighted input data to generate a total weighted value and add a bias value to the total weighted value to generate a biased value, and an activation processor configured to perform an activation function on the biased value to generate a group of output data. The generated output data may be temporarily stored in the output data cache 1116 and may be further transmitted to the memory 208 via the DMA module 1104.

FIG. 12 is a block diagram illustrating another example data modifier 1102 by which data modification for neural networks may be implemented. As the data modifier 1102 may be configured to only modify the input data, the data modifier 1102 may only include an input data modifier 1202. The dash-lined block indicates an optional weight modifier 904.

Similar to the input data modifier 902, the input data modifier 1202 may be configured to modify the input data depending on the operation mode. When operating in the pruning mode, the input data modifier 1202 may be configured to delete the groups of input data that correspond to connection values of zero. When operating in the compensation mode, the input data modifier 1202 may be configured to add one or more zeroes as the elements corresponding to connection values of zero.

In some aspects, the input data modifier 1202 may be implemented by one or more multiplexers and at least one storage device configured to store information indicating the current operation mode.

In a non-limiting example illustrated in FIG. 12, the input data modifier 1202 may include an input data filter 1206 and an input data multiplexer 1208. The input data filter 1206 may be configured to output an input element if a connection value corresponding to the input element is 1. Further, when the connection value is 0, the input data filter 1206 may be configured to ignore the corresponding input element and move to process the next input element. The input data multiplexer 1208 may be configured to output data from the input data filter 1206 when in the pruning mode and to directly output the input data when in the compensation mode. As such, those input elements corresponding to the connection values of zero may be deleted when the input data modifier 1202 is configured to work in the pruning mode.

FIG. 13 is a flow chart of aspects of another example method 1300 for modifying data for neural networks. The example method 1300 may be performed by one or more components of the MNN acceleration processor 206 as described in FIG. 11 and the component of the data modifier 1102 as described in FIG. 12.

At block 1302, method 1300 may include the data modifier 1102 receiving one or more groups of input data for generating one or more groups of output data. As previously described, the one or more groups of input data may be stored as input elements in an input array and each of the input elements is identified by an input array index. Method 1300 may further include the data modifier 1102 receiving connection data that include one or more connection values.

At block 1304, method 1300 may include the data modifier 1102 modifying the received groups of input data based on the connection data. In some aspects, the modifying may further include sub-processes or sub-operations including deleting at least one group of the input data stored as the input elements identified by the input array indexes corresponding to connection values of zero when the data modifier 1102 operates in the pruning mode. In some other aspects, the modifying may include adding one or more zero values to the input elements identified by the input array indexes corresponding to connection values of zero when the data modifier 1102 operates in the compensation mode.

In a specific example, the data modifier 1102 may receive an input array including groups of input data as elements (0.5, 0.6, 0.7, 1.2, 4, 0.1) and an array of connection values (1, 0, 0, 1, 1, 1). When the data modifier 1102 operates in the pruning mode, the data modifier 1102 may be configured to delete the corresponding groups of the input data, i.e., the second and the third groups of the input data (0.6 and 0.7). The modified input data may be stored as an array including elements (0.5, 1.2, 4, 0.1).

In another example, the data modifier 1102 may operate in the compensation mode. For example, the data modifier 1102 may receive an input array including elements (0.5, 1.2, 4, 0.1) and the same connection data including connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 1102 may be configured to add two zero-valued elements to the input array as its second and third elements, generating a modified input array including elements (0.5, 0, 0, 1.2, 4, 0.1).

At block 1306, method 1300 may include the computing unit 1110 calculating the one or more groups of output data based on the weight values and the modified input data. In some aspects, the computing unit 1110 may include one or more multipliers configured to multiply the modified input data by the weight values to generate one or more weighted input data, one or more adders configured to add the one or more weighted input data to generate a total weighted value and add a bias value to the total weighted value to generate a biased value, and an activation processor configured to perform an activation function on the biased value to generate a group of output data.

FIG. 14 is a block diagram illustrating another example MNN acceleration processor 206 in which data modification for neural networks may be implemented. As depicted, components in the example MNN acceleration processor 206 may be the same or similar to the corresponding components shown in FIG. 7 or may be configured to perform the same or similar operations to those shown in FIG. 7, except that a data modifier 1402 may be implemented between a DMA module 1404 and a weight cache 1414. For example, the DMA module 1404 may be configured to transmit and receive data from and to the memory 208, an instruction cache 1406, the data modifier 1402, an input data cache 1412, and an output data cache 1416. The instruction cache 1406, the input data cache 1412, the weight cache 1414, and the output data cache 1416 may respectively refer to one or more high-speed storage devices incorporated within the MNN acceleration processor 206 and configured to respectively store instructions from the DMA module 1404, the input data from the DMA module 1404, the modified weight values from the data modifier 1402, and the calculated output data from a computing unit 1410.

In this example, the data modifier 1402 may be configured to receive a predetermined weight value array that includes one or more weight values for calculating one or more groups of output data based on one or more groups of input data. The one or more groups of input data may be stored as input elements in an input array and each of the input elements is identified by an input array index. The one or more groups of output data are to be stored as output elements in an output array and each of the output elements is identified by an output array index. The data modifier 1402 may be further configured to receive connection data that include one or more connection values. In this example, unlike the data modifier 702/802, the data modifier 1402 is not configured to receive the input data as the input data may be directly transmitted from the DMA module 1404 to the input data cache 1412.

Upon receiving the weight values and the connection data, the data modifier 1402 may be configured to modify the weight values based on the connection data. For example, the data modifier 1402 may receive a predetermined weight value array including weight values (0.5, 0, 0, 0.8, 0.9, 0.4) and an array of connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 1402 may be configured to delete the corresponding weight values, i.e., the second and the third weight values, from the predetermined weight value array. The modified weight value array may be stored as an array including elements (0.5, 0.8, 0.9, 0.4).

In another example, the data modifier 1402 may receive a predetermined weight value array including elements (0.5, 0.8, 0.9, 0.4) and the same connection data including connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 1402 may be configured to add two zero-valued elements as the second and the third elements of the predetermined weight value array, generating a modified weight value array including elements (0.5, 0, 0, 0.8, 0.9, 0.4).

The modified weight values may be transmitted to and temporarily stored at the weight cache 1414. The modified weight values may be further transmitted, together with the input data from the input data cache 1412 and the decoded instructions from the controller unit 1408, to the computing unit 1410. The computing unit 1410 may be further configured to calculate one or more groups of output data based on the modified weight values and the input data. In some aspects, the calculation of the output data may include the forward propagation process and the backward propagation process described in accordance with FIG. 1.

Similar to the computing unit 710, the computing unit 1410 may include one or more multipliers configured to multiply the input data by the modified weight values to generate one or more weighted input data, one or more adders configured to add the one or more weighted input data to generate a total weighted value and add a bias value to the total weighted value to generate a biased value, and an activation processor configured to perform an activation function on the biased value to generate a group of output data. The generated output data may be temporarily stored in the output data cache 1416 and may be further transmitted to the memory 208 via the DMA module 1404.

FIG. 15 is a block diagram illustrating another example data modifier by which data modification for neural networks may be implemented. As the data modifier 1402 may be configured to only modify the weight values, the data modifier 1402 may only include a weight modifier 1504. The dash-lined block indicates an optional input data modifier 902.

Similar to the weight modifier 904, the weight modifier 1504 may be configured to modify the weight values depending on the operation mode. When operating in the pruning mode, the weight modifier 1504 may be configured to delete the weight values that correspond to connection values of zero. When operating in the compensation mode, the weight modifier 1504 may be configured to add one or more zeroes as the elements corresponding to connection values of zero.

In some aspects, the weight modifier 1504 may be implemented by one or more multiplexers and at least one storage device configured to store information indicating the current operation mode.

In a non-limiting example illustrated in FIG. 15, the weight modifier 1504 may include a first level weight multiplexer 1506 and a second level weight multiplexer 1508. The first level weight multiplexer 1506 may be configured to output a zero value if a corresponding connection value is 0 and to output a weight value corresponding to the connection value if the connection value is 1. The second level weight multiplexer 1508 may be configured to output data received from the first level weight multiplexer 1506 when in the compensation mode. Further, the second level weight multiplexer 1508 may be configured to directly output a corresponding weight value when in the pruning mode. As such, additional elements of zero values may be added to the weight value array when the weight modifier 1504 is configured to work in the compensation mode.

FIG. 16 is a flow chart of aspects of another example method for modifying data for neural networks. The example method 1600 may be performed by one or more components of the MNN acceleration processor 206 as described in FIG. 14 and by the components of the data modifier 1402 as described in FIG. 15.

At block 1602, method 1600 may include the data modifier 1402 receiving a predetermined weight value array that includes one or more weight values for calculating one or more groups of output data based on one or more groups of input data. Method 1600 may further include the data modifier 1402 receiving connection data that include one or more connection values.

At block 1604, method 1600 may include the data modifier 1402 modifying the weight values based on the connection data. In some aspects, the modifying may further include sub-processes or sub-operations including deleting at least one weight value that corresponds to a connection value that is zero when the data modifier 1402 operates in the pruning mode. In some other aspects, the modifying may include adding one or more zero values to the predetermined weight value array based on the connection values when the data modifier 1402 operates in the compensation mode.

In a specific example, the data modifier 1402 may receive a predetermined weight value array including weight values (0.5, 0, 0, 0.8, 0.9, 0.4) and an array of connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 1402 may be configured to delete the corresponding weight values, that is, the second and the third weight values, from the predetermined weight value array. The modified weight value array may be stored as an array including elements (0.5, 0.8, 0.9, 0.4).

In another example, the data modifier 1402 may receive a predetermined weight value array including elements (0.5, 0.8, 0.9, 0.4) and the same connection data including connection values (1, 0, 0, 1, 1, 1). Since the second and the third connection values are zeroes, the data modifier 1402 may be configured to insert two zero-valued elements as the second and the third elements of the predetermined weight value array, generating a modified weight value array including elements (0.5, 0, 0, 0.8, 0.9, 0.4).
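
Applying the hypothetical helpers sketched above to these two numeric examples reproduces the stated results; the assertions below are illustrative checks only.

    # Pruning mode: the weight values at the zero-connection positions
    # are deleted.
    assert prune_weights([0.5, 0, 0, 0.8, 0.9, 0.4],
                         [1, 0, 0, 1, 1, 1]) == [0.5, 0.8, 0.9, 0.4]

    # Compensation mode: two zero elements are restored as the second and
    # the third elements of the weight value array.
    assert compensate_weights([0.5, 0.8, 0.9, 0.4],
                              [1, 0, 0, 1, 1, 1]) == [0.5, 0, 0, 0.8, 0.9, 0.4]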

At block 1606, method 1600 may include the computing unit 1410 calculating the one or more groups of output data based on the modified weight values and the input data. In some aspects, the computing unit 1410 may include one or more multipliers configured to multiply the input data by the modified weight values to generate one or more weighted input data, one or more adders configured to add the one or more weighted input data to generate a total weighted value and add a bias value to the total weighted value to generate a biased value, and an activation processor configured to perform an activation function on the biased value to generate a group of output data.

It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Further, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described herein that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”

Moreover, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from the context, the phrase “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, the phrase “X employs A or B” is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.

Claims

1. An apparatus for modifying data for neural networks, comprising:

a data modifier configured to: receive one or more groups of input data, wherein the one or more groups of input data are stored as input elements in an input array and each of the input elements is identified by an input array index; receive a predetermined weight value array that includes one or more weight values for calculating one or more groups of output data based on the one or more groups of input data, wherein the one or more groups of output data are to be stored as output elements in an output array and each of the output elements is identified by an output array index; receive connection data that include one or more connection values, wherein each of the connection values corresponds to one of the input array indexes and one of the output array indexes and indicates whether one of the weight values in the predetermined weight value array is designated for calculating a group of the output data to be stored as the output element identified by the corresponding output array index based on a group of the input data stored as the input element identified by the corresponding input array index, and whether the weight value meets a predetermined condition; and modify the weight values and the input data based on the connection data; and
a computing unit configured to: receive the modified weight values and the modified input data from the data modifier; and calculate the one or more groups of output data based on the modified weight values and the modified input data.

2. The apparatus of claim 1, wherein the predetermined condition includes that the designated weight value is a non-zero number.

3. The apparatus of claim 1, wherein the predetermined condition includes that an absolute value of the designated weight value is less than or equal to a first threshold value.

4. The apparatus of claim 1, wherein the predetermined condition includes that an absolute value of the designated weight value is less than or equal to a second threshold value and greater than or equal to a third threshold value.

5. The apparatus of claim 1, wherein the data modifier is further configured to delete at least one weight value that corresponds to a connection value that is zero.

6. The apparatus of claim 1, wherein the data modifier is further configured to add one or more zero values to the predetermined weight value array based on the connection values.

7. The apparatus of claim 1, wherein the data modifier is further configured to delete at least one group of the input data stored as the input elements identified by the input array indexes corresponding to the connection values that are zero.

8. The apparatus of claim 1, wherein the data modifier is further configured to add one or more zero values to the input elements identified by the input array indexes corresponding to the connection values that are zero.

9. The apparatus of claim 1, wherein the computing unit further comprises:

one or more multipliers configured to multiply the modified input data by the modified weight values to generate one or more weighted input data.

10. The apparatus of claim 9, wherein the computing unit further comprises:

one or more adders configured to add the one or more weighted input data to generate a total weighted value.

11. The apparatus of claim 10, wherein the one or more adders are further configured to add a bias value to the total weighted value to generate a biased value.

12. The apparatus of claim 11, wherein the computing unit further comprises:

an activation processor configured to perform an activation function on the biased value to generate a group of the output data.

13. The apparatus of claim 1 further comprising:

a storage device configured to store the one or more groups of input data, the modified input data, the connection data, the modified weight values, instructions, and the calculated output data.

14. The apparatus of claim 1 further comprising:

an instruction cache configured to store instructions received from a central processing unit;
a controller unit configured to read the instructions from the instruction cache and decode the instructions;
an input data cache configured to store the modified input data;
a weight cache configured to store the modified weight values;
an output data cache configured to store the calculated output data; and
a direct memory access module configured to transmit and receive data from and to the storage device, the instruction cache, the controller unit, the input data cache, the weight cache, and the output data cache.

15. A method for modifying data for neural networks, comprising:

receiving one or more groups of input data, wherein the one or more groups of input data are stored as input elements in an input array and each of the input elements is identified by an input array index;
receiving a predetermined weight value array that includes one or more weight values for calculating one or more groups of output data based on the one or more groups of input data, wherein the one or more groups of output data are to be stored as output elements in an output array and each of the output elements is identified by an output array index;
receiving connection data that include one or more connection values, wherein each of the connection values corresponds to one of the input array indexes and one of the output array indexes and indicates whether one of the weight values in the predetermined weight value array is designated for calculating a group of the output data to be stored as the output element identified by the corresponding output array index based on a group of the input data stored as the input element identified by the corresponding input array index, and whether the weight value meets a predetermined condition; and
modifying the weight values and the input data based on the connection data; and
calculating the one or more groups of output data based on the modified weight values and the modified input data.

16. The method of claim 15, wherein the modifying further comprises deleting at least one weight value that corresponds to a connection value that is zero.

17. The method of claim 15, wherein the modifying further comprises adding one or more zero values to the predetermined weight value array based on the connection values.

18. The method of claim 15, wherein the modifying further comprises deleting at least one group of the input data stored as the input elements identified by the input array indexes corresponding to the connection values that are zero.

19. The method of claim 15, wherein the modifying further comprises adding one or more zero values to the input elements identified by the input array indexes corresponding to the connection values that are zero.

20. The method of claim 15, wherein the calculating further comprises:

multiplying the modified input data by the modified weight values to generate one or more weighted input data;
adding the one or more weighted input data to generate a total weighted value;
adding a bias value to the total weighted value to generate a biased value; and
performing an activation function on the biased value to generate a group of the output data.
Patent History
Publication number: 20180260711
Type: Application
Filed: May 9, 2018
Publication Date: Sep 13, 2018
Inventors: Shijin Zhang (Beijing), Qi Guo (Beijing), Yunji Chen (Beijing), Tianshi Chen (Beijing)
Application Number: 15/975,083
Classifications
International Classification: G06N 3/08 (20060101); G06N 3/04 (20060101); G06K 9/62 (20060101); G06F 17/15 (20060101); G06F 17/16 (20060101);