All the methods that have proved to be useful for problems in the past will continue to be useful for similar problems in the future. For example, Art Samuel developed a method of machine learning for his checker-playing program in 1959, which learned to play well enough to beat its author. It even won a game against the Connecticut state champion. Variations of his method are still widely used today. In fact, the method that the IBM Watson system used to learn how to evaluate responses to Jeopardy! questions is very similar to the method that Samuel developed to learn how to evaluate moves in checkers.
On the other hand, no AI system today can learn languages as rapidly and accurately as a young child. That means there will have to be quite a few more breakthroughs before ML systems can reach a human level. For further discussion, see slides 10 to 16 of http://www.jfsowa.com/talks/soup_llr.pdf and the citations in slide 17. For a critique of "deep learning", see the URLs to articles by Gary Marcus.
In my admittedly short experience in this field, I see that SVMs and shallow neural networks are still in use for less complex problems. Apart from deep learning, which I think will remain a trend for some years, kernel selection in SVMs based on the input data is a hot topic today. As for the future, who knows? :)
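To make the kernel-selection point a bit more concrete, here is a minimal sketch of data-driven kernel selection using scikit-learn's GridSearchCV. The dataset and the parameter grid are illustrative assumptions on my part, not anything prescribed in this thread.

# Minimal sketch: choosing an SVM kernel by cross-validation.
# The dataset and the parameter grid are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 1e-3, 1e-4]},
    {"kernel": ["poly"], "C": [0.1, 1, 10], "degree": [2, 3]},
]

search = GridSearchCV(SVC(), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X_train, y_train)

print("best kernel/parameters:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))

The point of the sketch is only that the kernel is treated as just another hyperparameter chosen from the data, rather than fixed in advance.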
I would also say ELM (Extreme Learning Machines), because it uses a linear model as its core optimisation. Currently, linear models are the only thing that works with contemporary amounts of data and the time constraints for producing results. Linear models benefit automatically from each release of new computational libraries or hardware (for instance, Matlab will automatically use a Xeon Phi for solving a linear system). What we need to find are smart algorithms built around linear models so we can apply them to all kinds of workloads: non-linear data, sparse data, noisy data, structured data like images, and so on...
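To illustrate the "linear model at the core" point, here is a minimal NumPy sketch of a single-hidden-layer ELM: the hidden layer is random and fixed, and all of the training effort is a single linear least-squares solve. The network size and the toy regression data are my own assumptions, not a reference implementation.

# Minimal ELM sketch: a random, fixed hidden layer followed by one
# linear least-squares solve for the output weights.  Sizes and data
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(500)

n_hidden = 100
W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
b = rng.standard_normal(n_hidden)                # random biases (never trained)

H = np.tanh(X @ W + b)                           # hidden-layer activations

# The only "training" step: solve the linear system H @ beta ≈ y.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Prediction on new points reuses the fixed random features.
X_test = np.linspace(-3, 3, 10).reshape(-1, 1)
y_pred = np.tanh(X_test @ W + b) @ beta
print(np.c_[X_test.ravel(), y_pred])

Because the only fitted parameters come from one linear solve, the method inherits whatever speedups the underlying linear-algebra library and hardware provide.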
I agree that ELM methods are much faster than the older methods of back-propagation and SVMs. It's also possible to combine multiple ELM networks for Hinton-style "deep learning". That combination might give the speed advantage of ELM together with the multi-level advantages of Hinton's technique.
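One way such a combination is sometimes sketched is by stacking ELM autoencoders, roughly in the spirit of multi-layer ELM. The sketch below is my own simplification of that idea; the layer sizes and the toy data are assumptions, and each layer is still trained by a single least-squares solve.

# Sketch of stacking ELM autoencoders (in the spirit of multi-layer ELM).
# Each layer: random hidden features, then one least-squares solve to
# reconstruct the layer's input; the learned output weights (transposed)
# become the layer's forward transform.  Sizes and data are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def elm_autoencoder_layer(X, n_hidden):
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)  # reconstruct the input
    return np.tanh(X @ beta.T)                    # transformed representation

# Toy data: two noisy classes in 20 dimensions.
X = np.vstack([rng.standard_normal((200, 20)) + 2,
               rng.standard_normal((200, 20)) - 2])
y = np.r_[np.ones(200), -np.ones(200)]

# Stack two ELM-autoencoder layers, then one ordinary linear solve on top.
H1 = elm_autoencoder_layer(X, 64)
H2 = elm_autoencoder_layer(H1, 32)
w, *_ = np.linalg.lstsq(H2, y, rcond=None)

print("training accuracy:", np.mean(np.sign(H2 @ w) == y))

Whether such stacked random-feature networks actually capture the representational benefits of Hinton-style deep learning is exactly the open question being debated here.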
But the criticisms of deep learning by Marcus would still have to be addressed. For a summary, see http://www.newyorker.com/news/news-desk/is-deep-learning-a-revolution-in-artificial-intelligence . As I pointed out in my slides, you can't have a human level of learning until you have a human level of language understanding -- and probably vice versa: http://www.jfsowa.com/talks/soup_llr.pdf .