For frequentists, probability is a feature of the physical world, particularly of processes (technical term: random experiment). It cannot be measured directly, only estimated by the relative frequency with which a process produces a particular outcome. If there is no repeatable process involved, there is no probability involved.
For Bayesians, probability is a state of mind, expressing how credible we rate a set of exhaustive possibilities relative to each other. Data / observations are used to calibrate that rating (via Bayes' theorem). It does not matter to Bayesians whether one considers the probability of an outcome of a random experiment or the probability of some state, theory, model or hypothesis. A Bayesian could assign a probability that the decimal digit at position 10^100 of Pi is 2, say. A frequentist can only say that this digit is unknown.
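The contrast can be sketched in a few lines of Python. The coin bias and the uniform belief over an unknown digit are illustrative assumptions, not part of either school's formal machinery:

```python
import random

random.seed(0)

# Frequentist view: probability is a property of a repeatable process,
# estimated by the relative frequency of an outcome over many trials.
p_true = 0.3  # hypothetical bias of a coin (assumption for this demo)
n = 100_000
heads = sum(random.random() < p_true for _ in range(n))
freq_estimate = heads / n  # approaches p_true as n grows

# Bayesian view: probability quantifies belief, even about a fixed but
# unknown fact. Before consulting any source, a uniform distribution over
# the ten possible values of an unknown decimal digit is a valid belief.
belief = {d: 1 / 10 for d in range(10)}  # P(digit = d) = 0.1 for each d

print(f"relative frequency after {n} flips: {freq_estimate:.3f}")
print(f"prior belief that the digit is 2: {belief[2]:.1f}")
```

The frequentist estimate only makes sense because the flips can be repeated; the belief over the digit makes sense to a Bayesian even though the digit is a fixed, non-random fact.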
This is, I think, the key difference. With this in mind you will easily find many resources on the web that explain it further.
" In short, Bayesians put probability distributions on everything (hypotheses and data), while frequentists put probability distributions on (random, repeatable, experimental) data given a hypothesis."
Naïve Bayes classification is based on applying Bayes' theorem with a naïve feature-independence assumption. It is advantageous not only because it relies on simple data frequency statistics but also because it is efficient on large data sets: training scales linearly with the number of samples.
The Bayesian framework balances model complexity against the likelihood of the observed data, favouring the simplest model that explains the observations. It is also worth noting that the naïve Bayes classifier is comparatively resistant to overfitting, processes very large numbers of samples quickly, and is tolerant of random noise.
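A minimal from-scratch sketch of a Bernoulli naïve Bayes classifier with Laplace smoothing shows both the frequency-counting at its core and why training is a single linear pass over the samples. The function names and the toy data are assumptions for illustration:

```python
import math

def train_bernoulli_nb(X, y, alpha=1.0):
    """Train a Bernoulli naive Bayes model with Laplace (add-alpha) smoothing.

    X: list of binary feature vectors, y: list of class labels.
    Training is a single pass over the data, so it scales linearly
    with the number of samples.
    """
    classes = sorted(set(y))
    n_features = len(X[0])
    counts = {c: [0] * n_features for c in classes}
    class_totals = {c: 0 for c in classes}
    for xi, yi in zip(X, y):
        class_totals[yi] += 1
        for j, v in enumerate(xi):
            counts[yi][j] += v
    model = {}
    for c in classes:
        prior = math.log(class_totals[c] / len(y))
        # Smoothing avoids zero probabilities for unseen feature values.
        probs = [(counts[c][j] + alpha) / (class_totals[c] + 2 * alpha)
                 for j in range(n_features)]
        model[c] = (prior, probs)
    return model

def predict(model, x):
    """Return the class with the highest posterior log-probability."""
    best_c, best_score = None, -math.inf
    for c, (prior, probs) in model.items():
        score = prior + sum(
            math.log(p) if v else math.log(1 - p)
            for v, p in zip(x, probs))
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# Toy usage with hypothetical binary feature vectors.
X = [[1, 0], [1, 1], [0, 1], [0, 0]]
y = [1, 1, 0, 0]
model = train_bernoulli_nb(X, y)
print(predict(model, [1, 0]))  # -> 1
print(predict(model, [0, 1]))  # -> 0
```

The smoothing term `alpha` is what keeps a feature value never seen in training from driving a class probability to zero, which is one reason the method degrades gracefully rather than overfitting hard.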
Bayesian concepts have shown great success across numerous disciplines, including drug design and discovery. Through Bayesian ideas, the model parameters can be constrained by the observed sample and estimated in a controlled fashion. Bayesian modeling is advantageous not only because it exploits data frequency statistics but also because it is efficient on large data sets. Laplacian-modified Bayesian analysis combined with extended-connectivity fingerprints is especially useful for high-throughput data analysis because it is fast, easily automated, and scales linearly with the number of samples.
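As a sketch, one common formulation of the Laplacian-corrected per-feature score used in such fingerprint-based models sums, over the bits present in a compound, the log-ratio of the smoothed hit rate to the baseline hit rate. The exact formula varies between implementations, so this version and all names in it are assumptions:

```python
import math

def laplacian_score(sample_bits, feature_stats, n_active, n_total):
    """Laplacian-corrected naive Bayes score for one compound.

    sample_bits: set of fingerprint bits present in the compound.
    feature_stats: dict mapping bit -> (actives_with_bit, total_with_bit).
    Each bit contributes log[(A_f + 1) / (T_f * P_base + 1)], where
    P_base = n_active / n_total is the baseline hit rate; the +1 terms
    shrink rarely-seen bits toward a neutral (zero) contribution.
    """
    p_base = n_active / n_total
    score = 0.0
    for bit in sample_bits:
        a_f, t_f = feature_stats.get(bit, (0, 0))
        score += math.log((a_f + 1) / (t_f * p_base + 1))
    return score

# Hypothetical statistics: one bit enriched in actives, one depleted.
stats = {"b_good": (10, 10), "b_bad": (0, 10)}
print(laplacian_score({"b_good"}, stats, n_active=50, n_total=100))  # positive
print(laplacian_score({"b_bad"}, stats, n_active=50, n_total=100))   # negative
```

Because each bit's contribution is a precomputable constant, scoring a compound is a simple sum over its set bits, which is what makes the approach fast enough for high-throughput screening data.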
Bayesian inference is a modeling approach that combines information from the data (the likelihood) with previously known knowledge (the prior), resulting in an updated state of knowledge about the process under investigation (the posterior). Bayesian inference has the following advantages: (i) flexibility to build hierarchical models and include several covariates without running into over-fitting problems; (ii) it allows historical prior knowledge to be integrated into the analysis; (iii) it is less prone to overfitting because the prior assumptions shrink the estimates; (iv) it allows direct computation of probabilities from posterior probability values; (v) it offers, as a direct result of the analysis, probabilistic credible intervals grounded in the axioms of probability theory; (vi) it is flexible in its assumptions, accommodating any probability distribution, including truncated versions.
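The prior-likelihood-posterior update and the credible intervals mentioned above can be illustrated with the simplest conjugate case, a Beta prior updated by binomial data. The prior parameters and counts below are hypothetical:

```python
import random

random.seed(1)

# Conjugate Beta-Binomial update: prior -> likelihood -> posterior.
a0, b0 = 2.0, 2.0            # weakly informative Beta prior (assumption)
successes, failures = 18, 7  # hypothetical observed data

# For a Beta prior and binomial data the posterior is closed form:
# Beta(a0 + successes, b0 + failures).
a_post, b_post = a0 + successes, b0 + failures
posterior_mean = a_post / (a_post + b_post)

# A 95% credible interval by Monte Carlo sampling from the posterior.
draws = sorted(random.betavariate(a_post, b_post) for _ in range(20_000))
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]

print(f"posterior mean: {posterior_mean:.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

The interval (lo, hi) is a direct probability statement about the parameter ("theta lies here with 95% probability, given prior and data"), which is the sense in which credible intervals differ from frequentist confidence intervals.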