"Bayesian statistics" generally refers to an approach to statistical inference that stands in contrast to "classical" (frequentist) approaches. The general idea is to use a probability distribution (the "prior") to represent uncertainty about the variables of interest, and then to update that distribution by combining it, via a likelihood function, with new information, yielding a posterior distribution. Classical/frequentist inference assumes a distribution for some variable or variables of interest, collects measurements/observations, and then compares those observations to the assumed distribution to determine the probability that the results were due to chance. Bayesian inference doesn't treat such an assumption as fixed; it treats it as a starting point to be revised. The prior distribution tells us what we should expect, and observations/measurements are compared against these expectations and used to update our knowledge about the variables of interest.
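To make the prior-to-posterior update concrete, here is a minimal sketch in Python using a Beta prior with a binomial likelihood (a conjugate pair, so the update has a closed form). All of the numbers are illustrative assumptions, not anything from the discussion above.

```python
# Minimal sketch of Bayesian updating: Beta prior + binomial
# likelihood. The hyperparameters and data below are invented
# purely for illustration.
from scipy import stats

# Prior belief about an unknown success probability p:
# Beta(2, 2) encodes a mild expectation that p is near 0.5.
alpha_prior, beta_prior = 2.0, 2.0

# New information: 7 successes in 10 trials.
successes, trials = 7, 10

# Conjugate update: posterior is Beta(alpha + successes,
# beta + failures). This is Bayes' rule worked out in closed form.
alpha_post = alpha_prior + successes
beta_post = beta_prior + (trials - successes)

posterior = stats.beta(alpha_post, beta_post)
print(f"posterior mean of p: {posterior.mean():.3f}")      # ~0.643
print(f"95% credible interval: {posterior.interval(0.95)}")
```

The prior pulls the estimate slightly toward 0.5 (the posterior mean is about 0.64 rather than the raw 7/10 = 0.7); with more data, the likelihood dominates and the prior's influence fades.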
In computational neuroscience, Bayesian methods are often used in much the same way as in machine learning/computational intelligence. In fact, Bayesian statistics is all about "learning": we have some information, we get more, and we combine the new information with the prior information (often iteratively) to "learn". Thus we can use Bayesian statistics for everything from modelling neural codes by adaptively fitting spike trains to building models of cognitive processes or neural population adaptation (a sketch of this idea applied to spike counts appears after the reference below). You may wish to check out
Doya, K. (Ed.). (2007). Bayesian brain: Probabilistic approaches to neural coding. MIT Press.
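As a hypothetical sketch of this iterative "learning" applied to spikes: below, a belief about a neuron's firing rate is updated window by window from observed spike counts, using a Gamma prior with a Poisson likelihood (another conjugate pair). The bin width, counts, and prior parameters are all invented for illustration, not drawn from any particular model in the literature.

```python
# Sketch: iteratively updating a belief about a neuron's firing
# rate from spike counts, using a Gamma prior with a Poisson
# likelihood. All values are illustrative assumptions.
from scipy import stats

# Prior belief about the firing rate (spikes/s): Gamma(shape, rate).
shape, rate = 2.0, 0.5          # prior mean = shape/rate = 4 Hz

bin_width = 1.0                 # seconds per observation window
spike_counts = [3, 5, 4, 6, 5]  # spikes observed in successive windows

for count in spike_counts:
    # Conjugate update: each Poisson observation adds its count to
    # the shape and its exposure time to the rate parameter.
    shape += count
    rate += bin_width
    print(f"after count={count}: estimated rate = {shape / rate:.2f} Hz")

# Full posterior over the firing rate after all observations.
posterior = stats.gamma(a=shape, scale=1.0 / rate)
print(f"95% credible interval: {posterior.interval(0.95)}")
```

Each pass through the loop is exactly the prior-becomes-posterior step described above: today's posterior serves as tomorrow's prior, which is what makes the approach naturally adaptive to nonstationary data such as a neuron whose firing rate drifts over time.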