Require a huge amount of data for training. This issue prevents us from applying these approaches to small datasets. To handle this problem, current studies are working on pruning approaches that aim at reducing the model complexity and, consequently, the number of samples needed.
There are many hyperparameters to be set by a human (e.g., number of layers, number of filters, filter shapes), in other words, to define the network architecture. To address this problem, recent works have proposed genetic algorithms, where the individuals (of the genetic algorithm) search for a suitable architecture.
Require high-performance hardware. Current deep models are very deep and have a high computational cost.
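To make the pruning idea above concrete, here is a minimal sketch of magnitude-based pruning (an illustrative example, not the method of any specific paper): weights with the smallest absolute values are zeroed out, reducing the effective complexity of the model.

```python
# Illustrative magnitude-based pruning on a plain weight matrix
# (list of lists). The function name and criterion are assumptions
# for this sketch, not a specific library API.
def prune_by_magnitude(weights, sparsity):
    """Zero out roughly the fraction `sparsity` of weights with smallest |w|."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    # Threshold below which weights are removed; ties may zero a few extra.
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

weights = [[0.9, -0.05, 0.3], [-0.01, 0.7, -0.2]]
pruned = prune_by_magnitude(weights, 0.5)
# pruned -> [[0.9, 0.0, 0.3], [0.0, 0.7, 0.0]]
```

In real pruning pipelines this step is usually followed by fine-tuning the remaining weights to recover accuracy.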
I would frame these as challenges rather than problems. If we really dive into the problems, let me share my opinion: in practice, deep learning extracts the traditional handcrafted features through its hidden layers (what traditional model-based approaches do explicitly). The challenging part is understanding what type of features it learns. Are they similar to the statistical ones? Do they maintain some constant characteristics, or are they always the same?
Artur Jordão has already mentioned the challenges. From his answer, the second one can be considered a problem.
One of the biggest problems in DL is its voracious demand for data. You need to make sure that there is enough data or, if not, that you can generate synthetic data. There is also the problem of the fading signal when using backpropagation: DL models prefer rectifiers (ReLU) as the activation function. Otherwise, DL is not that different from shallow artificial neural networks.
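The "fading signal" can be shown with a toy calculation (my own sketch, not from any specific source): the sigmoid derivative never exceeds 0.25, so a gradient pushed back through many sigmoid layers shrinks geometrically, while the ReLU derivative is 1 for positive inputs and preserves the signal.

```python
import math

def sigmoid_grad(x):
    """Derivative of the logistic sigmoid: s(x) * (1 - s(x)), at most 0.25."""
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    """Derivative of ReLU: 1 for positive inputs, 0 otherwise."""
    return 1.0 if x > 0 else 0.0

# Multiply the local derivatives through 30 layers, assuming the same
# positive pre-activation (0.5) at every layer for simplicity.
depth = 30
x = 0.5
sigmoid_signal = 1.0
relu_signal = 1.0
for _ in range(depth):
    sigmoid_signal *= sigmoid_grad(x)
    relu_signal *= relu_grad(x)
# sigmoid_signal is astronomically small; relu_signal is still 1.0
```

This is of course a simplification (real gradients also involve weight matrices), but it captures why rectifiers became the default choice in deep networks.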
I totally agree with the answers provided by previous colleagues, and just to add some more detail, I think that one of the most important problems in training very deep neural networks is that of "exploding" and "vanishing" gradients. These problems have been tackled in the literature by using normalized initialization (e.g. Xavier or He initialization) and specific neural network architectures, such as Long Short-Term Memory (LSTM) networks or Residual Networks (ResNets), which mitigate these issues and allow training very deep networks.
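As a rough sketch of what normalized initialization means (function names here are illustrative, not a framework API): Xavier/Glorot initialization scales the weight variance as 1/fan_in (suited to tanh/sigmoid units), while He initialization uses 2/fan_in (suited to ReLU units), so activations and gradients keep a roughly constant scale across layers.

```python
import math
import random

def xavier_init(fan_in, fan_out, rng=random):
    """Gaussian Xavier/Glorot init: variance ~ 1 / fan_in."""
    std = math.sqrt(1.0 / fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

def he_init(fan_in, fan_out, rng=random):
    """Gaussian He init: variance ~ 2 / fan_in, compensating for ReLU
    zeroing out half of the activations on average."""
    std = math.sqrt(2.0 / fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = he_init(1000, 10)  # a 1000x10 weight matrix with variance ~ 0.002
```

(Some variants draw from a uniform distribution or use the average of fan_in and fan_out; the scaling idea is the same.)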
In addition, I consider that another important limitation is the computational capability of current computers. In order to train very deep architectures in a reasonable amount of time, powerful GPUs are sometimes needed. In deep reinforcement learning, for example, this becomes a critical issue.
With respect to the amount of data needed for training deep learning models, I agree with previous comments, as deep NNs are prone to overfitting due to the complexity of these models (a very large number of parameters). Increasing the amount of training data (e.g. with data augmentation techniques) helps reduce the variance and thus prevents overfitting. In addition, it is always recommended to apply regularization techniques such as L2 regularization or dropout in order to reduce overfitting. When only a small training dataset is available, transfer learning techniques (e.g. fine-tuning) may help in training the model.
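Dropout, mentioned above, is simple enough to sketch in a few lines (this is the "inverted dropout" formulation commonly used at training time, written here as a standalone illustration): each unit is dropped with probability p, and the survivors are scaled by 1/(1-p) so the expected activation matches what the network sees at test time, when dropout is disabled.

```python
import random

def dropout(activations, p, rng=random):
    """Inverted dropout: drop each unit with probability p and rescale
    survivors by 1 / (1 - p). At test time this function is simply not
    applied."""
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

activations = [0.2, 1.5, 0.8, 0.1]
noisy = dropout(activations, 0.5)  # about half the units zeroed, rest doubled
```

Because a different random subnetwork is trained at every step, dropout acts like an inexpensive ensemble and discourages co-adaptation of units, which is why it reduces overfitting.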
From my perspective, for different types of data we need to fine-tune the network architecture. If some automatic adjustment or human-like mechanism were available (not only brain-inspired computation, but mechanisms based on how neuroscience says the brain really works, e.g., NUMENTA), then true AI with neural networks would be possible.