First of all, I would treat the network architecture itself as a hyper-parameter.
I think one of the most important of these is the depth of the network. This setting influences the capability of the network to learn patterns: as shown in previous work and in the ImageNet challenge, deeper networks achieve better results. For instance, the winning ImageNet architectures grew from 8 layers (AlexNet, 2012) to over 100 (ResNet, 2015), with accuracy improving accordingly.
There are many other parameters that influence network performance, e.g., skip-connections, the number of filters, activation functions, etc.; however, the most significant improvements have come from deeper networks. A sketch of how such choices can be exposed as tunable settings follows below.
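For concreteness, here is a minimal sketch (assuming PyTorch; the builder function and its argument names are my own illustration, not a standard API) of treating depth, width, and activation as ordinary tunable arguments:

```python
# A minimal sketch (assuming PyTorch): the architecture itself becomes a set of
# tunable arguments, so a hyper-parameter search can vary depth like any other
# setting. build_mlp and its argument names are illustrative.
import torch.nn as nn

def build_mlp(in_dim, out_dim, depth=3, width=64, activation=nn.ReLU):
    """Build an MLP whose depth, width, and activation are hyper-parameters."""
    layers = []
    dim = in_dim
    for _ in range(depth):
        layers += [nn.Linear(dim, width), activation()]
        dim = width
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

# Candidate configurations a search procedure might compare:
for depth in (2, 4, 8):
    model = build_mlp(in_dim=32, out_dim=10, depth=depth)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"depth={depth}: {n_params} parameters")
```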
As mentioned above, the network architecture is essentially a hyper-parameter itself, and in my experience it is the most influential one.
The next most influential one is the network depth. It has been shown that the depth of a deep neural network is crucial for the model's expressiveness, i.e., its capability to approximate complicated nonlinear separating hypersurfaces; see, e.g., the article "On the expressive power of deep neural networks" (Raghu et al., ICML 2017). A toy illustration of this depth effect follows below.
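As a toy illustration of why depth buys expressiveness, here is a small NumPy sketch of the classic sawtooth construction (in the spirit of Telgarsky's depth-separation result, not taken from the article above): stacking the same two-unit ReLU "tent" layer doubles the number of linear pieces with every extra layer, whereas a single hidden layer needs roughly one unit per piece:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def tent(x):
    # Two ReLU units implement the "tent" map on [0, 1].
    return 2 * relu(x) - 4 * relu(x - 0.5)

x = np.linspace(0.0, 1.0, 2049)
y = x
for _ in range(4):          # depth 4 -> 2**4 = 16 linear pieces
    y = tent(y)

# Count linear pieces by counting slope changes of the piecewise-linear output.
slopes = np.round(np.diff(y) / np.diff(x), 6)
pieces = 1 + np.count_nonzero(np.diff(slopes))
print(pieces)               # 16
```

With 4 layers of 2 units each (8 units total) the output has 16 linear pieces; matching that with one hidden layer would take about 16 units, and the gap grows exponentially with depth.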
Furthermore, it was shown (see http://proceedings.mlr.press/v38/choromanska15.html ) that depth is associated with the probability of the network getting stuck in a sharp minimum (which is bad for generalization ability). In particular, deeper networks show less tendency to fall into sharp minima, and their probability of converging to a flat minimum is higher. A rough way to probe this numerically is sketched below.
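One rough, hands-on proxy for flatness (my own illustration, assuming PyTorch; this is not the methodology of the paper above) is to measure how much the loss rises under small random weight perturbations: a flat minimum tolerates the noise, a sharp one does not.

```python
# A sketch of a sharpness proxy: average loss increase when the weights are
# perturbed by Gaussian noise. sharpness_proxy is an illustrative helper, not
# a library function.
import copy
import torch
import torch.nn as nn

def sharpness_proxy(model, loss_fn, data, targets, sigma=0.01, n_samples=20):
    """Average loss increase under N(0, sigma^2) weight perturbations."""
    base = loss_fn(model(data), targets).item()
    increases = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))
        increases.append(loss_fn(noisy(data), targets).item() - base)
    return sum(increases) / n_samples

# Toy usage (at random initialization, just to show the interface):
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
data, targets = torch.randn(128, 16), torch.randn(128, 1)
print(sharpness_proxy(model, nn.MSELoss(), data, targets))
```

In practice one would evaluate this at two trained minima (e.g., from a shallow and a deep model) and compare the numbers; a smaller value suggests a flatter minimum.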
So - yes, the second most important hyper-parameter of a network is its depth.
There are, of course, a lot of other hyper-parameters that may spoil a model :)