It is very important to understand the concepts of neural networks, backpropagation, and so on before going deeper into deep learning classifiers, since deep learning can be seen (in a coarse view) as a neural network with a larger number of hidden layers. Roughly speaking, classical machine-learning neural networks have 1-3 hidden layers, while deep learning approaches use many more.
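As a rough sketch of that distinction (my own illustration, with placeholder sizes and random weights rather than anything from the answers above), the only structural difference is how many hidden layers the input passes through:

```python
import numpy as np

# Same forward pass, once with one hidden layer ("classical" network)
# and once with several hidden layers ("deep" network).
rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0)

def forward(x, layer_sizes):
    """Run x through a stack of fully connected layers with random weights."""
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(0, 0.1, (n_in, n_out))
        x = relu(x @ W)
    return x

x = rng.random((1, 10))
shallow = forward(x, [10, 5, 2])              # one hidden layer
deep = forward(x, [10, 8, 8, 8, 8, 8, 2])     # five hidden layers
print(shallow.shape, deep.shape)
```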
I'd recommend a standard textbook on machine learning first if you are new to classifiers. For deep learning, take a look at, e.g., Geoffrey Hinton's work.
I recommend this course: Machine Learning on Coursera, taught by Andrew Ng of Stanford.
The lectures are easy to follow, and you can also learn how to implement machine learning algorithms in MATLAB, because the programming assignments are based on MATLAB code.
I am also into deep learning now, and I find this new area of neural networks really fascinating. You could look up the "cat" paper by the Google team, which trains on millions of images using 16,000 cores.
In my opinion, the easiest way to understand deep learning is to start from neural networks, which I believe most of us already know well.
Before deep learning was invented around 2006, almost all neural network researchers had jumped on the new SVM bandwagon. There were two reasons for this: (1) adding more layers to an MLP did not work (e.g., it could not compete with SVMs) because of the "diminishing error" (vanishing gradient) problem: the error back-propagated from the output layer to the inner layers gets smaller and smaller, so in practice the MLP does not learn; (2) a three-layer MLP had been mathematically shown to be a universal approximator, so in short there was no reason to add more layers.
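A small numerical sketch of reason (1), assuming sigmoid units and random weights (my own illustration, not from the answer above): the back-propagated error signal is multiplied by the local derivative at every layer, and with sigmoids that derivative is at most 0.25, so the signal shrinks rapidly as it travels down a deep stack.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_layers, width = 10, 20
x = rng.random(width)
W = [rng.normal(0, 0.5, (width, width)) for _ in range(n_layers)]

# Forward pass, storing activations of every layer.
acts = [x]
for Wl in W:
    acts.append(sigmoid(acts[-1] @ Wl))

# Backward pass: start from a unit error at the output and propagate it down.
delta = np.ones(width)
for l in reversed(range(n_layers)):
    a = acts[l + 1]
    delta = W[l] @ (delta * a * (1 - a))   # multiply by sigmoid'(z) each layer
    print(f"layer {l}: mean |error signal| = {np.abs(delta).mean():.2e}")
```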
Deep learning addresses the first problem by introducing pre-training (greedy layer-wise learning before moving up to the higher layers) and fine-tuning (correcting the unsupervisedly learned weights using a few labeled examples). It addresses the second reason by offering more abstraction and hierarchical feature learning through its many layers.
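To make the two phases concrete, here is a minimal sketch (my own simplification, using tied-weight autoencoders on synthetic data, not any specific published recipe): each layer is first trained without labels to reconstruct the output of the layer below, and the stacked weights are then fine-tuned with a small labeled set.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, n_hidden, lr=0.1, epochs=200):
    """Unsupervised phase: train W to reconstruct X through a hidden layer (tied weights)."""
    W = rng.normal(0, 0.1, (X.shape[1], n_hidden))
    for _ in range(epochs):
        H = sigmoid(X @ W)            # encode
        R = sigmoid(H @ W.T)          # decode with the same (tied) weights
        dR = (R - X) * R * (1 - R)    # reconstruction error at the output
        dH = (dR @ W) * H * (1 - H)   # error propagated to the hidden layer
        W -= lr * (X.T @ dH + dR.T @ H) / len(X)
    return W

# Greedy layer-wise pre-training on plentiful unlabeled data.
X_unlab = rng.random((500, 20))
W1 = pretrain_layer(X_unlab, 10)
W2 = pretrain_layer(sigmoid(X_unlab @ W1), 5)

# Fine-tuning: a small labeled set corrects the pre-trained weights.
X_lab = rng.random((50, 20))
y_lab = (X_lab.sum(axis=1) > 10).astype(float).reshape(-1, 1)
W_out = rng.normal(0, 0.1, (5, 1))
lr = 0.1
for _ in range(200):
    H1 = sigmoid(X_lab @ W1)
    H2 = sigmoid(H1 @ W2)
    P = sigmoid(H2 @ W_out)
    d3 = (P - y_lab) * P * (1 - P)
    d2 = (d3 @ W_out.T) * H2 * (1 - H2)
    d1 = (d2 @ W2.T) * H1 * (1 - H1)
    W_out -= lr * H2.T @ d3 / len(X_lab)
    W2 -= lr * H1.T @ d2 / len(X_lab)
    W1 -= lr * X_lab.T @ d1 / len(X_lab)
```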
The big difference between ordinary machine learning techniques and deep learning is that ordinary machine learning usually uses hand-crafted features, whereas in deep learning the features are crafted by the machine itself, without supervision, through its deep structure. Such hierarchical learned features are what give deep learning its currently best recognition performance in many cases.
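A hypothetical illustration of that contrast (feature choices and sizes are my own placeholders): in the ordinary approach a human decides which features to compute from the raw data, while in the deep approach the features are the activations of layers whose weights are learned.

```python
import numpy as np

rng = np.random.default_rng(1)
images = rng.random((100, 28 * 28))           # stand-in for raw image data

# Hand-crafted features: a human picks the statistics to compute.
hand_crafted = np.column_stack([
    images.mean(axis=1),                      # average intensity
    images.std(axis=1),                       # contrast
    (images > 0.5).mean(axis=1),              # fraction of bright pixels
])

# Learned features: activations of a first layer whose weights would be
# learned from data (random here only to keep the sketch self-contained).
W1 = rng.normal(0, 0.1, (28 * 28, 64))
learned = 1.0 / (1.0 + np.exp(-(images @ W1)))

print(hand_crafted.shape, learned.shape)      # (100, 3) vs (100, 64)
```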
The main shortcoming, and in fact also the advantage, of deep learning is its requirement for big data in order to craft the features without supervision.
As with neural networks, I find that we still lack a comprehensive theoretical basis to validate the results of deep learning performance. Without a sound theoretical basis, the field currently feels more like magic than science. I hope we will gain more insight into this aspect.
I have summarized and reviewed some of these deep learning algorithms; I hope this helps.
My suggestion is to follow this course from Stanford. It is very nice and applied. They do not bore you with all the mathematical derivations, focusing instead on understanding. Here is the link:
Deep learning is basically a neural network implementation with a large number of layers. If you want to start with MATLAB, study the NN toolbox, and also install MatConvNet for CNN-based deep learning if you are working in computer vision and images.