Definitely untrue. Led by pioneers such as Geoffrey Hinton, neural networks have recently seen a tremendous surge of interest, from the public as well as from the scientific community: indeed, the New York Times has recently run articles on his deep learning methods, which have obtained unequalled results on "AI" problems such as translation and text recognition.
The field of neural networks is very much resurgent and will surely remain highly active for a number of years. Two factors may be key to this renewed interest: (1) much improved computational power, thanks in part to GPU technology; and (2) a wealth of applications for such "soft" methods in the mobile consumer gadget market, applications which require speech recognition, image recognition, complex inference, etc., but need not be 100% accurate.
Researchers are now developing nature-inspired learning techniques for ANN training (a sketch of one such method follows below). So far only a handful, such as PSO, ACO, CS, and ABC, are in common use, while there are 18,000 kinds of insects in this world. So I think it is impossible for researchers to stop making further improvements.
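As a rough illustration of what such nature-inspired training looks like, here is a minimal particle swarm optimization sketch fitting the weights of a tiny network on XOR. This is illustrative only: the network size, hyperparameters, and all function names here are my own choices, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_loss(params, X, y):
    # Tiny 2-2-1 tanh network; params holds all 9 weights/biases flattened
    W1 = params[:4].reshape(2, 2); b1 = params[4:6]
    w2 = params[6:8];              b2 = params[8]
    h = np.tanh(X @ W1 + b1)
    out = np.tanh(h @ w2 + b2)
    return np.mean((out - y) ** 2)

def pso(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Bare-bones particle swarm: no gradients, just cognitive/social pulls."""
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()     # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f               # update each particle's best
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# XOR encoded with +/-1 labels to match the tanh output range
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1], dtype=float)
best = pso(lambda p: mlp_loss(p, X, y), dim=9)
print(mlp_loss(best, X, y))  # should get close to 0 on this toy task
```

The same loop works for ACO, CS, or ABC by swapping the position-update rule; the network itself never needs to be differentiable.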
Research in ANNs themselves is getting saturated, but the evolution of ANNs through hybridization is currently attracting more attention from researchers. The good thing about ANNs is that the backpropagation equations are already stable and well established.
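For reference, the backpropagation equations being called stable here are standard textbook material (notation mine: $C$ is the cost, $z^{(l)}$ and $a^{(l)}$ the pre-activations and activations of layer $l$, $W^{(l)}$ the weights, $\sigma$ the activation function, $\odot$ the elementwise product):

$$\delta^{(L)} = \nabla_{a} C \odot \sigma'(z^{(L)}), \qquad \delta^{(l)} = \big((W^{(l+1)})^{\top} \delta^{(l+1)}\big) \odot \sigma'(z^{(l)}), \qquad \frac{\partial C}{\partial w^{(l)}_{jk}} = a^{(l-1)}_{k}\, \delta^{(l)}_{j}.$$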
No, I don't think so, as there are still many areas in which ANNs can be used. Agriculture alone will yield many new projects, and not only in optimization.
It could be, in areas where somebody has found a direct method to solve the problem; but until every problem has its own direct method, heuristics will be the answer.
This is a complex question for which there are many answers.
1) There has been a clear tendency for many innovative developments that took place within the artificial neural network research community to mature, with further research then conducted under the umbrella of (statistical) machine learning. In such cases, the original bio-inspired starting point has faded away and methods are being developed in the framework of multivariate statistics, information theory, Bayesian modeling, etc.
For instance, the number of research papers discussing the once-famous backpropagation has declined considerably, as methods such as SVMs (support vector machines) have proven more efficient on similar tasks. On the other hand, there are classical methods, such as the self-organizing map, that are still very actively investigated and applied (see e.g. http://www.die.uchile.cl/wsom2012/ - you can also try to Google "self-organizing map" and then "backpropagation algorithm").
2) Bio-inspired computing is developing in waves. The increased understanding of the brain is constantly feeding new ideas to the computational modeling community. Sometimes these steps take a long time, because stepping over disciplinary boundaries is not a simple process. People need to learn to speak each other's language to a sufficient degree before breakthroughs can start to take place. Conferences reflect the continuous presence of neuroscientists (see e.g. http://icann2012.org/).
3) One seemingly important development is related to the attempts in understanding the brain as a whole. For instance, research on cognitive architectures deals with this point of view. In such research, it is not sufficient to solve some straightforward classification or clustering problems but it is necessary to consider many aspects of cognitive processing including supervised, unsupervised as well as reinforcement learning, perception-action loop, multimodality, different kinds of memory, connections between non-symbolic and symbolic reasoning processes, etc.
These kinds of ideas were in mind, for example, when we organized the AKRR (Adaptive Knowledge Representation and Reasoning) conference series. The keynotes of AKRR 2008 give an idea of the scope of such developments ( http://research.ics.aalto.fi/events/AKRR08/keynote.shtml ).
It even seems that quite a lot of research has become stuck in a "local minimum", in which classical classification or clustering problems are solved slightly better than before, with only marginal benefits gained. On the other hand, neural network and machine learning researchers have started to expand their efforts into areas such as education (http://cogsys.blogspot.fi/2012/12/andrew-ng-online-revolution-education.html), sustainability (http://cogsys.blogspot.fi/2012/12/nips-2012-workshop-on-computational.html) and wellbeing (http://link.springer.com/chapter/10.1007%2F978-3-642-33266-1_58).
ANNs are not saturated yet. Twenty years ago, some AI experts were already arguing that ANNs were saturated and even useless. Today the same people are again successfully using and exploring ANNs.
I'd also add that deep neural networks (deep belief nets; see e.g. the work of Hinton et al., http://www.cs.toronto.edu/~hinton/ or Schmidhuber, http://www.idsia.ch/~juergen/ ) are demonstrating new possibilities. And then there are the issues of processing time-varying signals in real time, perhaps with spiking networks... So no, I think there's more to do. And not just minimal optimisations that produce small improvements on a test dataset either!
I think many of the answers stated here are confusing research IN ANNs with research USING ANNs.
Unlike the majority here, I believe that research in ANNs is not only saturated but already "finished" (meaning as finished as a science theme can be).
Theory and practice regarding many types of ANNs are already very well developed, and there is probably little room for further developments (at least any considerable ones).
HOWEVER, as a tool and an applicable strategy, it will never be finished. It's like asking whether the use of hammers is saturated. Of course not: there are always problems where ANNs will have the best performance (the "no free lunch" theorem states precisely that; see the statement below!). Also, there will always be ways to improve and adapt ANNs for specific problems, but I consider that fine-tuning of the algorithm to meet problem-specific requirements, and that's hardly research on ANNs (contrary to what others have said).
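For reference, the Wolpert-Macready no-free-lunch result (my paraphrase of the standard statement) says that for any two optimization algorithms $a_1$ and $a_2$, summed over all possible objective functions $f$,

$$\sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ denotes the sequence of cost values observed after $m$ evaluations. Averaged over all problems, no algorithm dominates; so for any method, ANNs included, there exist problems where it beats the alternatives and problems where it loses.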
But let me be clear on this: I'm not saying ANNs are useless. ANNs are very powerful, but, again, like any other algorithm (including SVMs), their performance strongly depends on the problem being studied.
I agree with Paul: for any bio-inspired approach it is necessary to separate theory from practice. On the theory side, I believe ANNs still offer much room for investigation. We have seen studies on new discoveries in neuroscience; researchers point to changes in how neuron communication is understood, and new theories can emerge from this. Science does not yet know everything about neurons. This is my personal view, based on my recent reading.
Now, talking about ANNs in practice is easier. This area is a whole field for research. I believe the intersection between computational intelligence techniques and the various areas of science is the most attractive field of research today. Many results presented in recent years point to this. So, ANN approaches are not saturated.
I agree with Paul and Moises. ANNs are very useful for many engineering applications. They need some fine-tuning and are very much dependent on the problem being studied.
I believe ANN research is not saturated, for many reasons. First of all, the field is going through a recent burst of activity, as others have pointed out, thanks to breakthroughs in deep neural networks and hybrid methods. These breakthroughs are significantly improving the applicability of ANNs. Moreover, I believe many breakthroughs are still waiting in the future. They relate to the outstanding open problems of the field: modularity, deep structures, scaling-up issues, the relationships between evolution, development, learning and adaptation, and many others. As long as brains are impressive computational systems, and as long as our ANNs pale in comparison, I believe the field of ANNs will never be saturated.
I think NN history is very interesting in that it has had its ups and downs throughout its lifetime. In each of these cycles it was claimed that the field had reached its end. This points to the unpredictability of research, especially in a field with as many aspects as NN.
Something we should ask ourselves is whether a particular aspect of NN research is a worthy research topic, depending on the stage it is at. Whether it is saturated or not depends on the specific problem you are trying to tackle. If you ask most of the researchers in this forum about classical backpropagation, most will agree that it is saturated. I still think some useful research can be done in this area, but one must choose carefully. If you look into deep neural networks, as Leslie points out, most of us will agree that it is a very interesting area because it is relatively new.
One must be careful when stating preliminary results in a well-researched area, since a lot of ground has already been covered. This means one also has to cover that ground before stating results, and be creative about presenting new ideas on a well-researched topic. But this does not necessarily mean the field is dead.
Certainly not. There are always pros and cons in any research domain. But it is certain that many engineering problems could be attempted in practice with an appropriate selection and implementation of a network. In my view, there are many research problems in this field across various domains and industries. On the theoretical side, one can think hard about designing a new neural network that improves the solution of a particular problem and opens up a new dimension of solutions.
Ten years ago, when I worked on neural networks, most researchers in the machine learning field believed that neural networks were the past. Thanks to people such as G. Hinton and Y. Bengio (God bless them) and their interesting contributions to deep learning, today a lot of researchers in the machine learning field believe that neural networks are the future.
The value of neural network techniques is increasing day by day, in the scientific community and in commercial applications such as stock market prediction, or predicting the outcomes of games like cricket, football, hockey, rugby, basketball, etc.
I have used neural networks in linguistic computation, gender prediction, and image processing, and I would like to go deeper. So please don't speak of saturation.
I think that for any truly scientific problem there is no end point to research, because new problems always come along with improvements and progress.
In fact, SVMs are just a particular way of training a neural network (a Perceptron), as indeed is Backpropagation (for multilayer Perceptrons). In both cases you can use the kernel trick to map to other spaces and higher dimensions, and you can also try combining with different functional preprocessing steps (actually part of the original Perceptron model) or using evolutionary or swarm techniques in combination with neural networks.
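To make the kernel-trick point concrete, here is a minimal kernel perceptron sketch (NumPy; all function names are mine, and this is the classical dual-form perceptron update, not an SVM's margin-maximizing training):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel, the same trick used by kernel SVMs
    return np.exp(-gamma * np.sum((x - y) ** 2))

def train_kernel_perceptron(X, y, kernel, epochs=10):
    """Classic kernel perceptron: mistakes accumulate as dual coefficients."""
    n = len(X)
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            # Prediction is a kernel-weighted sum over past mistakes
            s = sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(n))
            if y[i] * s <= 0:          # misclassified -> bump dual weight
                alpha[i] += 1.0
    return alpha

def predict(X_train, y_train, alpha, kernel, x):
    s = sum(alpha[j] * y_train[j] * kernel(X_train[j], x)
            for j in range(len(X_train)))
    return 1 if s > 0 else -1

# XOR-style data: not linearly separable, but separable under an RBF kernel
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])
alpha = train_kernel_perceptron(X, y, rbf_kernel, epochs=20)
print([predict(X, y, alpha, rbf_kernel, x) for x in X])  # expect [-1, 1, 1, -1]
```

The decision function has exactly the same kernelized form as an SVM's; only the rule for choosing the dual coefficients differs.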
Also, techniques like AdaBoost really just produce a perceptron-like weighted sum of products, which is like another layer on top of underlying SVMs or multilayer perceptrons. And ELMs set the weights on one layer randomly!
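And the ELM point as a sketch (again my own minimal code, under the usual ELM assumption that the hidden layer is random and only the output layer is fit, by least squares):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=30):
    """Extreme Learning Machine: random hidden layer, least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output weights only
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) from noisy samples
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # small training MSE
```

Nothing about the hidden layer is learned, which reduces training to a single linear solve.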
In fact, logic gates, rule-based learners and classification trees can also be viewed as networks that map into the ANN space quite easily, so anything any CPU or GPU can do can be regarded as within the capacity of an ANN.
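The logic-gate observation is easy to demonstrate: single threshold units implement AND, OR and NAND directly, and stacking two layers yields XOR (a toy sketch; weights and names are mine):

```python
import numpy as np

def step(z):
    return (z > 0).astype(int)

def gate(x, w, b):
    # A single perceptron unit: threshold of a weighted sum
    return step(x @ w + b)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

AND  = gate(X, np.array([1, 1]), -1.5)    # fires only when both inputs are 1
OR   = gate(X, np.array([1, 1]), -0.5)
NAND = gate(X, np.array([-1, -1]), 1.5)
# XOR needs a second layer: XOR(a, b) = AND(OR(a, b), NAND(a, b))
XOR = gate(np.column_stack([OR, NAND]), np.array([1, 1]), -1.5)
print(AND, OR, NAND, XOR)  # [0 0 0 1] [0 1 1 1] [1 1 1 0] [0 1 1 0]
```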
Conversely, none of the algorithms I've mentioned, other than the basic Hebbian idea of the perceptron, are particularly bio-plausible. Other techniques like self-organizing maps are more bio-plausible, and heterogeneous mixes of different types of neurons, layers and mechanisms would also be more bio-plausible. And then there are spiking neural networks...
Until we've mapped the human brain and know exactly how it works, there's plenty more scope to make neural networks better. Really, we are at such a primitive stage compared with what the human brain/body can do that ANNs should be regarded as being right at the very beginning of their evolution.
A generational renewal in ANN-based machine learning is actually underway, powered by TDA (topological data analysis). In the end, much more invariance, robustness and compression (in terms of information representation and generalization) can be expected by enforcing filtrations.
A related ResearchGate question on the continuous evolution of ANNs in a TDA context: https://www.researchgate.net/post/Which_is_the_universal_neural_network_implementation_like_feed_forward_CNN_Or_RNN_which_can_be_used_in_any_problem_domain
In my budding professional opinion, research in artificial neural networks is not coming to an end. A particular research trend in artificial neural networks might be plateauing, or rather reaching cruise control, but the research as a whole is not coming to an end. There are research projects and topics that make limited use of artificial neural networks because of the difficulty of constructing the data to make it usable for ANNs, and the output information generated by ANNs then has to be communicated properly and effectively. In this regard, the research and development is not finished. The research really has not begun yet. Great question, and I hope these responses help.