Deep learning has become popular in recent years, but it typically requires large datasets. So my question is: is there any benefit to using deep learning if we only have a small dataset?
You can benefit from techniques such as transfer learning (also search for one-shot learning). The idea is that if you can find a similar task for which a large dataset exists, you can train a large neural network on that task and then adapt it to your original task (via initialization, re-training the top layers, or representation learning).
For example, these techniques now work quite well for most image recognition tasks. So I'd say it depends a lot on your problem.
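Here is a minimal transfer-learning sketch in PyTorch (assuming torchvision >= 0.13; ResNet-18 and the 5-class head are just placeholders for your own task):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (the "similar task" with a large dataset).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained layers so their learned representations are kept.
for param in model.parameters():
    param.requires_grad = False

# Replace the top layer with a fresh classifier for the small target task.
num_classes = 5  # placeholder: set this to your own number of classes
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are trained on your small dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

With very little data you would typically train only this new head; with somewhat more data you can also unfreeze the last few blocks and fine-tune them at a lower learning rate.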
Generally, you will not see a benefit, because with a small amount of data deep learning methods tend to overfit: the model ends up more complex than the data can support. That is, unless you come up with a regularization scheme tailored to your problem.
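If you try a deep model anyway, generic regularizers like dropout and weight decay (not the problem-tailored kind mentioned above, but the usual baseline) are the place to start. A minimal sketch; the layer sizes are illustrative, not tuned:

```python
import torch
import torch.nn as nn

# A deliberately small network with dropout; L2 regularization (weight decay)
# is applied through the optimizer.
model = nn.Sequential(
    nn.Linear(20, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes units during training
    nn.Linear(32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```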
Generally, deep networks are designed on the assumption that you have enough data to feed their hunger over many training iterations. Giving them just a handful of examples isn't necessarily the wrong approach to the problem, but it won't be as effective as deep learning is intended to be. You could instead use models suited to small datasets and still get a good result, as in the sketch below.
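As a concrete baseline, a small, well-regularized classical model evaluated with cross-validation is often the stronger choice on a few hundred samples (the sketch uses scikit-learn and the 150-sample Iris data purely as an example):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 150 samples: a genuinely small dataset where a linear model does well.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# 5-fold cross-validation gives a more reliable estimate than a single split.
print(cross_val_score(clf, X, y, cv=5).mean())
```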
You can use a network pre-trained on some large dataset like ImageNet and fine-tune it on your own dataset. Current deep learning frameworks also let you do on-the-fly data augmentation, which reduces the chances of overfitting.
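For the augmentation part, a minimal sketch with torchvision (the transform choices and parameters are illustrative; the normalization statistics are the standard ImageNet ones, matching a pre-trained backbone):

```python
from torchvision import transforms

# On-the-fly augmentation: every epoch sees freshly transformed copies of the
# same images, which acts as a regularizer when the dataset is small.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```

Pass this as the `transform` argument of your dataset so the random transforms are re-sampled every time an image is loaded.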