There are many factors, to be specific: the network architecture, the choice of technique for the data at hand, the hyperparameters, the weight initialization, the loss function, and the optimization method, to name a few.
There are different parameters that need to be optimized, such as the activation function, the training algorithm, the number of nodes and hidden layers, the number of epochs, the proportion of training to testing samples, and so on.
All of these factors can deeply influence the performance of ANNs.
When the training phase is completed, you can generate a C source file that stores activations and outputs in scalar variables instead of arrays, and uses numeric literals instead of variable accesses. You can also unroll loops, use macros rather than function calls, and replace the sigmoid (or whatever activation function you use) with a piecewise linear approximation.
Another option might be to use a computer algebra system to treat the whole network, with all its layers, as a single function mapping an input vector to an output vector, and let it simplify the resulting formula. I'm unsure whether this is practical, though; it may be just fantasy.
The performance depends on the type of neural network used.
For a multilayer neural network on a given dataset, the quality depends on the number of hidden layers, the number of neurons per layer, and the error minimization method chosen. The latter usually has parameters of its own that must also be tuned to the problem at hand.