Can the depth be controlled by the complexity of the object (e.g. faces, written characters, and the like) in a deep learning network for image processing?
So you want a piece of software that can estimate the complexity of the object?
You could train a network to do that, but if the depth of such a network should also depend on the complexity of the task (i.e. estimating complexity) then you're kind of stuck :-).
In order to ensure the best results for a deep learning network, you should first try to organise it in a cybernetic setting and secondly ensure the learners' engagement. Please see some of my papers about the Cybernetic model in e-Education.
Here are some ideas that I think could be worth a try for B&W images, based on the use of Restricted Boltzmann Machines (RBMs).
1) Since the data will be binary-valued, you can calculate the bit-string entropy and draw thresholds based on the entropy outputs of the stacked RBMs.
2) Calculate average density regions on the inputs and mark thresholds for the overall density of black pixels in the image.
3) If using chain codes, compute the overall change in direction of the chain code as a measure of complexity.
These ideas involve additional work outside of the network and are not definitive solutions, but I think they might pave the way for other, more refined ideas.
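A minimal sketch of the three complexity measures above for a binary image, using NumPy. The function names and the toy 3x3 image are my own illustration, not from any particular library, and the chain code is assumed to be 8-connected (directions 0-7):

```python
import numpy as np

def bitstring_entropy(img):
    """Idea 1: Shannon entropy of the flattened binary image, in bits."""
    bits = np.asarray(img).astype(bool).ravel()
    p = bits.mean()  # fraction of 1-bits
    if p in (0.0, 1.0):
        return 0.0   # all-0 or all-1 string carries no entropy
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

def black_pixel_density(img):
    """Idea 2: overall density of black (1) pixels in the image."""
    return float(np.asarray(img).astype(bool).mean())

def chain_code_turning(chain_code):
    """Idea 3: total change in direction along an 8-connected chain code."""
    cc = np.asarray(chain_code)
    diffs = np.abs(np.diff(cc))
    diffs = np.minimum(diffs, 8 - diffs)  # wrap around the 8 directions
    return int(diffs.sum())

# Toy binary "image" (1 = black pixel)
img = np.array([[0, 1, 1],
                [0, 1, 0],
                [1, 1, 0]])

print(bitstring_entropy(img))    # ~0.991 bits: pixels are nearly balanced
print(black_pixel_density(img))  # 5/9 of the pixels are black
print(chain_code_turning([0, 0, 1, 2, 2, 3]))  # 3 unit direction changes
```

Each measure returns a single scalar, so it is straightforward to compare it against a fixed threshold when deciding whether to stack another RBM.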
In general, you can include the current size of your network at a given point in training as an additive term in your overall "cost function". Usually, a cost function starts as some representation of the overall residual error of your network. There are existing cost functions that take into account your aggregate network error plus a network-size penalty; one such is the AIC (Akaike information criterion), or its small-sample variant, the AICc. Your implementation may vary, and at some point it becomes subjective based on what makes sense for your overall system, but this idea heads in the right direction.
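As a sketch of how such a size-penalized cost behaves, here is the AIC under the common Gaussian-error approximation, AIC = n·ln(RSS/n) + 2k, where k is the parameter count. The numbers in the comparison are made-up illustrations, not measurements:

```python
import math

def aic_cost(residual_sum_squares, n_samples, n_params):
    """AIC (Gaussian-error form): n * ln(RSS / n) + 2k.
    Lower is better; the 2k term penalizes network size."""
    return n_samples * math.log(residual_sum_squares / n_samples) + 2 * n_params

def aicc_cost(residual_sum_squares, n_samples, n_params):
    """AICc: AIC plus a correction term for small sample sizes."""
    aic = aic_cost(residual_sum_squares, n_samples, n_params)
    return aic + (2 * n_params * (n_params + 1)) / (n_samples - n_params - 1)

# Hypothetical comparison: a 10x larger network buys only a tiny error
# reduction, so its AIC ends up worse (higher).
small = aic_cost(residual_sum_squares=10.0, n_samples=100, n_params=50)
large = aic_cost(residual_sum_squares=9.5, n_samples=100, n_params=500)
print(small < large)  # True: the size penalty outweighs the small error gain
```

The same comparison can be run after adding each layer, stopping growth once the penalized cost stops decreasing.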
You could choose to increase the model width instead of the depth, because deeper models often show only diminishing improvements in accuracy, or even degradation.
I believe there might be some theoretical lower bound that determines the minimum number of layers for a given set of raw features, beyond which increasing the number of layers gives only minimal advantages or even degradation. But I don't know whether such a lower bound exists.