In my research, I need to compare the neural networks I have built, consisting mainly of perceptron (fully-connected) and normalization layers, with networks from other publications that contain convolution, pooling, normalization, and perceptron layers, in terms of computational complexity. I can calculate the number of parameters a given network has in each layer, but I don't know how I should compare the networks.
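
For context, this is roughly how I obtain the per-layer parameter counts (a minimal sketch, assuming PyTorch; the toy model below is only a stand-in for my actual architecture):

```python
import torch.nn as nn

# Toy stand-in for my network: mainly fully-connected (perceptron)
# and normalization layers.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.LayerNorm(256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Tally trainable parameters per layer and in total.
for name, module in model.named_children():
    n_params = sum(p.numel() for p in module.parameters() if p.requires_grad)
    print(f"layer {name} ({module.__class__.__name__}): {n_params} parameters")

total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"total: {total} parameters")
```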

Should I count only the convolution layers, as the most computationally taxing, or sum the number of parameters across all layers?

How should I compare neural networks that have computationally expensive convolution layers with networks that lack them but perform feature extraction in a different way?
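
For concreteness, the per-layer costs I have in mind are multiply-accumulate (MAC) counts; the formulas below are the standard textbook ones, not taken from any particular publication, and the layer sizes are hypothetical:

```python
def conv2d_macs(c_in, c_out, k, h_out, w_out):
    """MACs for one 2-D convolution layer: each output position
    accumulates over a k*k*c_in receptive field per output channel."""
    return c_in * c_out * k * k * h_out * w_out

def linear_macs(n_in, n_out):
    """MACs for one fully-connected (perceptron) layer:
    equal to its weight count."""
    return n_in * n_out

# A 3x3 conv mapping 64 -> 128 channels over a 56x56 feature map
# costs far more MACs than its parameter count suggests, while a
# fully-connected layer's MACs equal its number of weights.
print(conv2d_macs(64, 128, 3, 56, 56))  # ~231M MACs, but only ~74k weights
print(linear_macs(4096, 4096))          # ~16.8M MACs and ~16.8M weights
```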
