1. A Gaussian process (GP) directly captures model uncertainty. For example, in regression a GP gives you a full predictive distribution, rather than just a single value as the prediction. This uncertainty is not directly captured in neural networks (see for example https://arxiv.org/abs/1506.02142)
2. When using a GP, you can encode prior knowledge about the shape of the model by selecting different kernel functions. For example, your answers to the following questions may suggest different priors: Is the model smooth? Is it sparse? Should it be able to change drastically? Should it be differentiable?
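To make point 1 concrete, here is a minimal GP regression sketch written from scratch. The RBF kernel, length scale, noise level, and toy data are my own illustrative choices, not anything from the answer above; the point is that the posterior gives both a mean and a standard deviation, and the standard deviation grows away from the training points:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential (RBF) kernel: encodes a prior that f is smooth
    sq = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * sq / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2, length_scale=1.0):
    # Standard GP regression equations: condition the prior on noisy observations
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy data: a few noisy-free samples of sin(x)
x_train = np.array([-2.0, -1.0, 0.0, 1.5])
y_train = np.sin(x_train)
x_test = np.linspace(-3, 3, 7)
mean, std = gp_posterior(x_train, y_train, x_test)
# std is small near the training points and grows far from them
```

Choosing a different kernel here (point 2) would encode a different prior: e.g. a Matérn kernel would allow rougher functions than the infinitely differentiable RBF.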
If you want to learn the basics of GP, I recommend this lecture by Nando de Freitas: https://www.youtube.com/watch?v=4vGiHC35j9s
A collection of GP models is available in the GPstuff toolbox: http://research.cs.aalto.fi/pml/software/gpstuff/
For Gaussian process models, a good way to understand how they apply to linear and nonlinear systems is through the Kalman filter (KF) and the extended Kalman filter (EKF), respectively. In both cases the Kalman filtering assumptions have to match the problem; we then determine the necessary parameters and the initial values of the filter, and iterate through the estimates. This is a probabilistic approach (based on Gaussian distributions); such filters are typically used for decision-making problems, or to solve problems like SLAM, where the environment is mostly dynamic and sometimes stochastic.
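The predict/update iteration described above can be sketched for the simplest case, a scalar random-walk state. The noise variances, initial values, and the constant-signal test data are my own illustrative assumptions, not from the answer:

```python
import numpy as np

def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter with a random-walk state model.

    x0, p0: initial state estimate and its variance
    q, r:   process and measurement noise variances (assumed known)
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state stays put, but its uncertainty grows by q
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy readings of a constant true value of 5.0
rng = np.random.default_rng(0)
z = 5.0 + rng.normal(0.0, np.sqrt(0.5), 50)
est = kalman_1d(z)
```

The extended Kalman filter follows the same predict/update loop but linearizes a nonlinear state-transition or measurement model at each step.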
Neural networks are an adaptive mechanism that enables a computer to learn from experience. The environment here is mostly deterministic (the training examples to be memorised are normally limited). The knowledge is stored as synaptic weights between neurons rather than as a dynamic model of a system; in other words, the knowledge is embedded in the entire network. It cannot be broken into individual pieces, and any change to a synaptic weight may lead to unpredictable results. (With probabilistic rules, by contrast, the knowledge can be divided into individual rules, and the user can see and understand each piece of knowledge applied by the system.) Finally, some neural networks can learn with or without human intervention (self-organizing neural networks, for example). Neural networks are mostly used for classification, e.g. pattern-recognition problems.
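The point about knowledge living entirely in the weights can be seen with the smallest possible example: a single neuron trained by gradient descent to classify the AND function. Everything below (the task, learning rate, and iteration count) is an illustrative assumption; after training, the learned "knowledge" is nothing but the weight vector `w` and bias `b`:

```python
import numpy as np

# Inputs and targets for the AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(1)
w, b = rng.normal(size=2), 0.0
lr = 0.5
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad = pred - y                             # cross-entropy gradient
    w -= lr * (X.T @ grad) / len(X)             # full-batch gradient descent
    b -= lr * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

Inspecting `w` and `b` afterward shows why the knowledge is hard to interpret piecewise: the classification behaviour emerges from their combination, not from any individual value.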
This is my own point of view on both of them; you can find many other perspectives online.