More neurons lead to better approximation, up to a certain threshold. Generally, for approximation in supervised learning, you need at least as many hidden neurons as there are features in a single input. For lossy compression in unsupervised learning, the hidden layer needs at least 50% of the input dimensionality. I have tested these cases; if you need papers, let me know.
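A minimal sketch of those two rules of thumb, assuming a hypothetical 10-feature input; the layer widths are the point here, and the framework (PyTorch) and everything else are illustrative:

```python
import torch.nn as nn

n_features = 10  # hypothetical input dimensionality

# Supervised approximation: hidden width equal to the number of
# input features (the stated minimum; wider is also allowed).
regressor = nn.Sequential(
    nn.Linear(n_features, n_features),  # width >= n_features
    nn.ReLU(),
    nn.Linear(n_features, 1),
)

# Lossy compression (autoencoder): bottleneck at ~50% of the input size.
autoencoder = nn.Sequential(
    nn.Linear(n_features, n_features // 2),  # 50% bottleneck
    nn.ReLU(),
    nn.Linear(n_features // 2, n_features),
)
```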
Increasing the number of hidden layers will affect the classification accuracy. But consider that there is a limit to these hidden layers: beyond it there is no further improvement, and adding more may result in overfitting.
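One way to find that limit empirically is to grow the depth one layer at a time and watch a held-out validation score. A hedged sketch with scikit-learn's MLPClassifier on synthetic data (the dataset, widths, and depth range are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical synthetic data; substitute your own dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Add one hidden layer at a time and track validation accuracy.
for depth in range(1, 7):
    clf = MLPClassifier(hidden_layer_sizes=(32,) * depth,
                        max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    # Validation accuracy plateaus (or drops) past some depth.
    print(depth, clf.score(X_val, y_val))
```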
@Eiad Almekhlafi As I increase the number of hidden layers, the KPIs decrease, but after a certain fixed number of hidden layers they stop decreasing. Is it the better option, then, to go with that fixed number of hidden layers?
A hidden layer in an artificial neural network is a layer between the input layer and the output layer, where artificial neurons take in a set of weighted inputs and produce an output through an activation function.
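In code, one hidden layer is just a weighted sum plus a bias passed through an activation. A minimal NumPy sketch (ReLU is one common choice of activation, and the shapes here are arbitrary):

```python
import numpy as np

def hidden_layer(x, W, b):
    """One hidden layer: weighted inputs plus bias, then an activation."""
    z = W @ x + b            # weighted sum of the inputs
    return np.maximum(z, 0)  # ReLU activation

x = np.array([0.5, -1.2, 3.0])  # example input vector (3 features)
W = np.random.randn(4, 3)       # 4 hidden neurons, 3 inputs each
b = np.zeros(4)
print(hidden_layer(x, W, b))    # the layer's output
```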
Hidden layers are the secret sauce of your network. They allow you to model complex data thanks to their nodes/neurons. They are "hidden" because the true values of their nodes are unknown in the training dataset; we only know the inputs and outputs. Please consider these links:
Article Study of ANN with variable Inputs and Hidden Layers
Moiz Qureshi, what does adding higher-order coefficients do for polynomial fitting? Essentially the same thing happens for a NN, but a NN can fit more complex functions than polynomial fitting can.
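To make the analogy concrete, here is a hedged sketch: fitting noisy samples of a sine with polynomials of increasing degree. The training residual keeps shrinking as the degree grows, even as the high-degree fits start to overfit the noise (the data and degrees are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # noisy target

# Higher-order coefficients let the polynomial bend more, just as extra
# layers/neurons let a network fit more complex functions.
for degree in (1, 3, 9, 12):
    coeffs = np.polyfit(x, y, degree)
    residual = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(degree, residual)  # training error falls; high degrees overfit
```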
On philosophical grounds, I would avoid deploying any opaque AI in high-risk applications (e.g., where lives are on the line). In low-risk applications I would ask: is deploying this not-well-understood AI better than doing nothing? And I would ask: can I live with the ramifications if the AI misses the mark completely?
More generally, if your AI is performing well and you can answer yes to "Is the function being fit by the machine learning algorithm causal (the law of identity applied to action)?" and yes to "Is it informative?", then you know it will always work under the conditions you meant it for. If you can answer no to either question, then you know you are dealing with spurious correlations and your AI's performance is artificial. If you cannot answer yes or no, then your AI is rolling the dice: it may coincidentally stumble onto a causal fit (which is the hope), but it could just as easily stumble onto spurious correlations in the data (a possibility pretty much everyone evades). Fisher himself couldn't figure out how to rule out spurious correlations with certainty, because the very act of collecting data introduces the opportunity for bias.
IMO, applying the concept of a "secret sauce" to something nobody understands is the hope that machines can become oracles to truths inaccessible to the human mind. In reality (at least for today), opaque ML algorithms are just curve fitting; with opaque AI you simply have no idea what the curve is or what it means. Employing this line of thinking is reality-evading; it's like saying, "just let the machines worry about reality, I don't care to know".
ML algorithms have their place in high-risk applications. They can filter out non-causal and/or non-informative features, leaving the scientist with exponentially less work to do.
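As one example of such filtering, a hedged sketch using mutual information scores from scikit-learn to flag non-informative features (the dataset and threshold are hypothetical):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Hypothetical data: 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)

# Mutual information scores each feature's relevance to the target;
# near-zero scores flag non-informative features to drop.
scores = mutual_info_classif(X, y, random_state=0)
keep = scores > 0.01  # hypothetical threshold
print(f"kept {keep.sum()} of {len(scores)} features")
```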
I agree with Sir Eiad Almekhlafi. To improve classification accuracy, it is necessary to add more hidden layers. It's important to keep in mind, however, that there is a limit to these hidden layers; after that, there is no more improvement, and this may result in overfitting.