Artificial Neural Networks have two basic aspects: (a) the Architecture and (b) the Learning Strategy.
How far can we go in bringing innovations to the Architectures of Deep Convolutional Neural Networks? For example, along dimensions such as depth, width, number of channels, number of split-transform paths (cardinality), etc.
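As a rough illustration of how these architectural dimensions trade off against one another, here is a minimal Python sketch (the function name and the example numbers are mine, chosen only for illustration). It counts the parameters of a single grouped convolution, showing how raising the number of split-transform paths (groups) at fixed width sharply reduces the parameter budget, which is what lets architectures spend capacity on more paths instead:

```python
def conv_params(c_in, c_out, k=3, groups=1):
    # Parameter count of a k x k convolution:
    # each output channel sees only c_in / groups input channels.
    assert c_in % groups == 0 and c_out % groups == 0
    return k * k * (c_in // groups) * c_out

# A plain 64 -> 64 conv vs. a split-transform version with 32 paths
# (illustrative numbers, not tied to any specific published model):
dense = conv_params(64, 64)             # -> 36864
split = conv_params(64, 64, groups=32)  # -> 1152
print(dense, split)
```

The same kind of quick accounting applies to depth and width: stacking layers grows parameters linearly, while widening a layer grows them roughly quadratically.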
This question comes to mind because research on the "Learning Strategies" of deep neural networks is not progressing at the same pace as research on their "Architectures".