Representation learning is not intended for feature selection; it is meant to build better representations of informative features. Even with representation learning, bad features remain bad, or may even get worse.
Google and Baidu have applied deep learning to vision tasks such as recognizing natural images of animals, plants, and landscapes, and also to object detection.
This is not a learning technique, but a method of artificial intelligence.
Deep learning is a biologically inspired approach to building an artificial brain, mechanizing certain cognitive tasks so that computers can read, see, and answer questions the way humans do.
With massive computational power, machines can recognize objects and translate speech in real time. Google first used this technology on images of natural flora and fauna, and later extended it to other kinds of objects. Programmers train a neural network to detect an object or phoneme by blitzing the network with digitized images containing those objects or sound waves containing those phonemes. These techniques are used throughout artificial intelligence.
Thanks a lot, I understand how amazing deep learning is. However, my concern is: in deep learning, do we still need to do feature selection to separate good features from bad ones? Suppose the AI is asked to recognize objects in an image, but the image is of very poor quality. Do you think the AI can still be relied on to recognize them?
Deep learning grew out of the observation that most of the effort in machine learning goes into constructing meaningful features. The idea behind representation learning is that you do not have to do much preprocessing: you give the algorithm the data 'raw' (strings, pictures as pixel matrices, ...) and let it build a meaningful representation based on the data alone. Of course, this is often only possible when massive amounts of data are available...
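A minimal sketch of that idea, assuming PyTorch and purely synthetic data (the dataset, sizes, and epoch count here are illustrative, not from any real benchmark): the network is fed raw pixel tensors and learns its own intermediate representation while fitting the labels, with no separate feature-engineering step.

```python
# Minimal sketch: the network consumes raw pixel values directly --
# no hand-crafted features -- and learns its own intermediate
# representation while fitting the labels.
import torch
import torch.nn as nn

# Hypothetical stand-in for a real dataset: 256 "images" of 32x32 RGB pixels.
images = torch.rand(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),   # class scores for 10 classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The same pattern scales to real datasets; the point is simply that the only inputs are raw pixels and labels.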
Thanks a lot, so massive amounts of training data are the key to making deep learning effective. Is there any research on how much training data is needed for it to work well?
Before deep learning emerged around 2006, almost all neural-network researchers had jumped on the SVM bandwagon. There were two reasons for this: (1) adding more layers to an MLP did not work (e.g., well enough to compete with SVMs) because of the "diminishing error" (vanishing-gradient) problem, i.e., the error signal back-propagated from the output layer to the inner layers gets smaller and smaller, so the deeper MLP in fact does not learn; and (2) a three-layer MLP had been mathematically shown to be a universal approximator, so there seemed to be no reason to add more layers.
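As a quick numerical illustration of that "diminishing error" effect, here is a small NumPy sketch (the depth, width, and initialization scale are arbitrary demonstration choices): a deep sigmoid MLP receives a random error signal at its output, and the magnitude of the back-propagated signal shrinks layer by layer on its way toward the input.

```python
# Illustration of the vanishing-gradient effect: in a deep sigmoid MLP,
# the back-propagated error signal shrinks as it travels toward the early
# layers, so those layers barely learn.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 10, 50

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random weights, roughly in the spirit of early MLP initialization.
weights = [rng.normal(0, 0.5, (width, width)) for _ in range(depth)]

# Forward pass on a random input batch.
activations = [rng.normal(size=(32, width))]
for W in weights:
    activations.append(sigmoid(activations[-1] @ W))

# Backward pass from an arbitrary error signal at the output.
delta = rng.normal(size=(32, width))
for layer in reversed(range(depth)):
    a = activations[layer + 1]                      # sigmoid output of this layer
    delta = (delta * a * (1 - a)) @ weights[layer].T
    print(f"layer {layer}: mean |gradient signal| = {np.abs(delta).mean():.2e}")
```

Running it shows the printed magnitudes decaying by orders of magnitude from the top layer down to layer 0.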
Deep learning addresses the first problem through pre-training (greedily learning each layer before moving up to the higher layers) and fine-tuning (correcting the unsupervisedly learned weights using a small amount of labeled data). It addresses the second reason by offering more abstraction and hierarchical feature learning through its many layers.
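Here is a compact sketch of that two-stage recipe, assuming PyTorch and synthetic data (the layer sizes, learning rates, and iteration counts are illustrative): each layer is first trained as an autoencoder on the codes produced by the layer below, without any labels, and the stacked encoders are then fine-tuned with a small labeled set.

```python
# Sketch of greedy layer-wise pre-training followed by supervised fine-tuning.
import torch
import torch.nn as nn

x = torch.rand(512, 100)                      # unlabeled data
x_lab = x[:64]                                # a few labeled examples
y_lab = torch.randint(0, 5, (64,))

sizes = [100, 64, 32]                         # input -> hidden1 -> hidden2
encoders = []

# 1) Pre-training: train each layer as an autoencoder on the output
#    of the layer below, without using any labels.
inputs = x
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(inputs)), inputs)
        loss.backward()
        opt.step()
    encoders.append(enc)
    inputs = enc(inputs).detach()             # feed the learned codes to the next layer

# 2) Fine-tuning: stack the pre-trained encoders, add a classifier on top,
#    and adjust all weights with the small labeled set.
model = nn.Sequential(*encoders, nn.Linear(sizes[-1], 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x_lab), y_lab)
    loss.backward()
    opt.step()
```

Pre-training gives each layer sensible initial weights from unlabeled data, so the error signal during fine-tuning no longer has to do all the work of shaping the deep stack.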
The big difference between ordinary machine-learning techniques and deep learning is that ordinary machine learning usually relies on hand-crafted features, whereas in deep learning the features are crafted by the machine itself, largely without supervision, through its deep structure. These hierarchical features are what give deep learning the currently best recognition performance in many cases.
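To make the "hierarchical features" point concrete, the toy PyTorch snippet below pushes one raw RGB image through a small, untrained convolutional stack and prints the shape of each intermediate representation; the comments describe the kinds of features such layers typically end up encoding after training, which is an assumption about trained networks, not something this untrained example demonstrates.

```python
# Each layer re-represents the previous layer's output, going from raw pixels
# to progressively more abstract feature maps (shapes only; untrained weights).
import torch
import torch.nn as nn

layers = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # typically edge/blob-like filters after training
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # typically textures and simple parts
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # typically object-level cues
)

x = torch.rand(1, 3, 64, 64)                 # one raw RGB "image"
for i, layer in enumerate(layers):
    x = layer(x)
    print(f"after layer {i} ({layer.__class__.__name__}): {tuple(x.shape)}")
```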
The main shortcoming of deep learning, and in fact also its advantage, is its requirement for big data in order to craft the features in an unsupervised way.
As with neural networks in general, I find that we still lack a comprehensive theoretical basis to explain the performance of deep learning. Without a sound theoretical foundation, the field currently feels more like magic than science. I believe we can hope to gain more insight into this aspect.
I have summarized and reviewed some of these deep learning algorithms. I hope this helps.