Neural networks are an important tool for artificial intelligence (AI), but AI seems to be trapped in a quagmire of slow development when it relies on neural networks alone. Can neural networks satisfy AI's main requirements, or is this viewpoint mistaken?
Yes, many applications of AI can be addressed using ANNs.
Also, you can see the applications of AI covered by the International Journal of Distributed Artificial Intelligence (IJDAI), a specialized AI journal.
Currently, variants of neural networks known as 'deep learning models' are pushing the boundaries of what we believe a machine can do. There are numerous types of deep learning models specialized for specific tasks, and as time progresses more field- and application-specific models appear, such as RNNs for time-series prediction and CNNs for image recognition. A recent article published in Nature shows how task-specialized deep learning models are being applied in Earth system science (Reichstein et al., 2019), for example video prediction models used for short-term forecasting and language translation models used for dynamic time-series modelling. But even the authors state that the next step is a hybrid model. In short, at this point in time, neural networks (and their derivatives and variants, such as deep learning models) have the highest chance of satisfying not just AI's main requirements but also the requirements of other fields where these models are being applied. To reach new levels of performance, however, field-specific knowledge must be integrated into newer, more specialized models.
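As a purely illustrative aside (not taken from the cited paper), here is a minimal sketch of the kind of task-specific model mentioned above: a small CNN image classifier in PyTorch. The layer sizes, the 32x32 input resolution and the 10-class output are assumptions made only for this example.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal illustrative CNN; architecture choices are arbitrary assumptions."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel (RGB) input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(1)              # flatten everything except the batch dimension
        return self.classifier(x)

# Example forward pass on a random batch of four 32x32 RGB images.
model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```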
Reference
Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., & Carvalhais, N. (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195.
From a historical perspective, neural networks have been important to AI for the following reasons:
In 1989, K. Hornik and colleagues proved that feedforward neural networks are universal approximators (an informal statement is sketched just after this list).
Much work was done in the 1990s on studying the VC dimension and PAC learning in neural networks.
There is current work on Neural Association Models, which seems promising, and Neural Network Embeddings are opening areas that were once limited to symbolic processing (a tiny embedding illustration appears after the theoretical points below).
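For reference, this is the universal approximation property from the 1989 result mentioned above, stated informally; the precise conditions on the activation function vary between versions of the theorem.

```latex
% Informal statement (after Hornik et al., 1989); conditions on the activation
% function \sigma are stated loosely here.
% For any continuous f on a compact K \subset \mathbb{R}^d and any \varepsilon > 0,
% there exist N and parameters v_i, b_i \in \mathbb{R}, w_i \in \mathbb{R}^d such that
\[
\sup_{x \in K}\; \Bigl|\, f(x) - \sum_{i=1}^{N} v_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| \;<\; \varepsilon .
\]
```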
It is worth mentioning that most of the work done during the 1980s focused on memorizing the training set and on the capacity of neural networks as memory devices. That line of research has largely been displaced by a focus on the generalization capability of NNs.
These theoretical developments laid a solid foundation for neural networks.
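As a small, hypothetical illustration of the neural network embeddings mentioned above, the following PyTorch snippet maps discrete symbols to dense vectors that a network can process; the toy vocabulary and the embedding dimension are arbitrary choices made only for this example.

```python
import torch
import torch.nn as nn

vocab = {"cat": 0, "sat": 1, "mat": 2}           # toy symbolic vocabulary (illustrative)
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

# Discrete symbols become indices, which the embedding maps to dense vectors.
tokens = torch.tensor([vocab["cat"], vocab["sat"], vocab["mat"]])
vectors = embedding(tokens)                      # shape: (3, 8)
print(vectors.shape)

# Similarity between symbols becomes a geometric relation between their vectors.
sim = torch.cosine_similarity(vectors[0], vectors[1], dim=0)
print(float(sim))
```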
That said, NNs have some limitations:
Recurrence in neural networks (e.g. backpropagation through time) is limited. For deep architectures, the complexity of contemplating a long time frame while also being deep is quite high. LSTMs have additive cell updates and "forget gates" and therefore store information better, but they still eventually suffer from vanishing gradients (the standard cell equations are sketched after this list). This points to the larger problem of catastrophic forgetting in most NN architectures (ART is an exception, but it grows very quickly).
Support for data structures such as queues and stacks is very limited (either through limited recurrence or as shift registers). Also, due to the shift in emphasis towards generalization, these efforts have been stagnant.
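For context on the LSTM point above, these are the standard LSTM cell equations in common notation (not taken from a specific source). The additive cell update is what gives the better storage mentioned, while gradients can still vanish through repeated gating.

```latex
% Standard LSTM cell: \sigma is the logistic sigmoid, \odot is element-wise product.
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) &&\text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) &&\text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) &&\text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) &&\text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&\text{(additive cell update)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(hidden state)}
\end{aligned}
```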
These limitations constrain neural networks as general computing elements that can be deployed in isolation: they need additional traditional programming to complete complex tasks and are used mainly as pattern recognition devices.
Given these two limitations, I find it difficult to see how NNs can satisfy all of AI's requirements, especially for symbolic processing that needs to contemplate time evolution and large memory storage. Also, the efficiency of NNs on graph data is rather limited.
Indubitably, current achievements in the area of connectionism can have a great bearing on efforts in Artificial Intelligence. The connectionist approach to cognition, whereby multiple connections between brain nodes form a massive interactive network involving many simultaneous processes operating in parallel, has brought about very interesting outcomes that are important to understanding human thought and action. Consequently, as you have astutely observed, neural networks are perhaps the answer to the needs of Artificial Intelligence.
Deep convolutional networks, auto-encoders, stacked Boltzmann machines, RBFNNs, and many other models, together with reinforcement learning, have already attained accuracies of up to 99% on many tasks, including image classification and weather prediction. For me, these results are strong enough to be confident about the future.
The answer certainly depends on what you mean by AI's main requirements. For me, current AI means solving tasks that require some human cognitive effort and cannot be trivially performed by machines (e.g., summing large numbers or finding a shortest path in a graph surely requires a lot of cognitive effort, but it is easy when approached algorithmically). In this sense, I believe NNs still have a lot to say. I certainly would not say that the development is trapped. We have simply had the luck to experience an unprecedented leap forward, brought by NNs, in many tasks that can be considered AI, which is why it may seem that current development is stuck.
Neural networks are essential in solving many artificial intelligence problems, for example pattern recognition, speech recognition, disease detection, and human activity recognition. They are capable of mimicking the human brain, so they are able to carry out many tasks performed by human beings, especially decision making. Hence, NNs are a very important method in AI.