Abstract: A method of training a neural network, including: receiving sets of digital attributes representing a multidimensional regression at the inputs; expanding the network by adding neurons and defining their activation functions and interconnections, wherein any neuron of the neural network may be directly connected to any other neuron; (i) when the training speed falls below a predefined threshold and accuracy does not improve, identifying the neuron with the highest error value; (ii) adding a neuron directly between the identified neuron and the corresponding output; (iii) setting only the connection coefficient between the added neuron and the identified neuron to zero before it is modified by the training, while the other coefficients of the added neuron are set equal to the coefficients of the identified neuron before they are modified. After at least one iteration, either (iv) finishing the training of the neural network or (v) continuing to train the network until a predefined depth is reached.
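Steps (i)-(iii) above can be sketched as a single network-growing operation. The representation below (a dict mapping each neuron id to its incoming connection coefficients) and all names are illustrative assumptions, not the patent's data structures; copying the identified neuron's output-side weight to the added neuron is one reading of "other coefficients ... are set the same".

```python
def insert_neuron(weights, errors, output_id, new_id):
    """Grow the network by one neuron, per steps (i)-(iii).

    weights: dict neuron_id -> dict(source_id -> coefficient); hypothetical layout.
    errors:  dict neuron_id -> current error value.
    """
    worst = max(errors, key=errors.get)        # (i) neuron with highest error
    new_w = dict(weights[worst])               # (iii) inherit the identified
                                               #       neuron's coefficients
    new_w[worst] = 0.0                         # (iii) only the direct link to the
                                               #       identified neuron starts at zero
    weights[new_id] = new_w                    # (ii) new neuron sits between the
                                               #      identified neuron and the output
    weights[output_id][new_id] = weights[output_id].get(worst, 0.0)
    return worst
```

Zero-initializing only the new direct link means the inserted neuron initially reproduces the identified neuron's behavior, so accuracy cannot degrade at the moment of insertion; training then adjusts the zeroed coefficient.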
Abstract: An optimization method with parallel computations, including: performing multiple stages of calculating a target function of P independent parameters using GPUs, wherein the entire one-dimensional array of calculated target-function values, of length ∏_{j=1}^{P} W_j, is divided into groups of size β and calculated in parallel at L = ∏_{j=1}^{β} W_j parameter points; the number of simultaneously calculated parameters β in each group and the number of calculation points W_j of the target function on the interval D_j for each j-th desired parameter in the group are selected based on the possible number of parallel calculations R = G·M·T, where G is the number of GPUs, M is the number of cores in each GPU, and T is the number of threads in each core; and outputting the calculated P parameters for the global extremum of the target function, wherein a full cycle of calculating the points is carried out over consecutive iterations, whose count is ⌈L/R⌉ (integer division of L by R, rounded up).
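The scheduling arithmetic above reduces to a few products and one ceiling division. A minimal sketch, with a hypothetical function name and argument layout (the patent fixes the formulas, not an API):

```python
from math import ceil, prod

def plan_parallel_search(W, G, M, T, beta):
    """Return (R, L, iterations) for one group of beta parameters.

    W: list of point counts W_j per parameter interval D_j.
    G, M, T: GPUs, cores per GPU, threads per core.
    """
    R = G * M * T              # possible number of parallel calculations
    L = prod(W[:beta])         # parameter points in a group of beta parameters
    iterations = ceil(L / R)   # consecutive iterations for a full cycle
    return R, L, iterations
```

For example, with W = [10, 10, 10], two GPUs of 4 cores running 8 threads each, and β = 2, there are R = 64 parallel slots, L = 100 points per group, and a full cycle takes ⌈100/64⌉ = 2 iterations.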