The idea of a Bayesian neural network is as primitive as the answer of a failing student. You have a neural network model, but no matter how hard you train it there is always a residual error, so what to do? And the student tells you: then replace each scalar in the network with a normally distributed random variable and tune the expectations and variances to match the data.
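For context, here is a minimal NumPy sketch of what that idea amounts to: every weight gets a mean and a variance, and predictions are made by sampling the weights. The names, shapes, and the trivially linear "network" are my own illustration, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.normal(size=(2, 1))        # per-weight means (tuned during training)
log_sigma = np.full((2, 1), -1.0)   # per-weight log standard deviations

def sample_prediction(x):
    # Draw one weight sample w ~ N(mu, sigma) and run the network;
    # repeating this gives a predictive distribution instead of a point.
    w = mu + np.exp(log_sigma) * rng.normal(size=mu.shape)
    return x @ w

x = np.array([[3.0, 5.0]])
preds = np.array([sample_prediction(x) for _ in range(100)])  # predictive samples
```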

Although this concept fails miserably, a large group of scientists keeps pushing it into use. I can easily provide proof of these strong statements. The simplest stochastic system, which anyone can reproduce at home, is a coin and dice. You pick two random inputs by rolling one die twice, say 3 and 5, and flip the coin. On heads you roll 3 dice and add the outcomes; otherwise you roll 5 dice. The sum of the outcomes is your stochastic output. Simple, right? Now make a few hundred records and try to obtain the bimodal distribution with any publicly available library designed to support BNNs. The result will not be even remotely close to reality. But the solution is simple and has been known for at least 50 years: KNN. For each given input you find several similar records, treat each output as the expectation of a normal distribution, assign a variance from common sense, and you see this beautiful bimodal distribution, very close to the real one. It is called KDE and has been known for decades. Funny? A sketch of the experiment follows below.
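Here is a minimal sketch of the coin-and-dice experiment and of the KNN/KDE baseline, assuming 500 records and my own choices of k and kernel variance; the exact numbers do not matter for the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_record(rng):
    # Two inputs: one die rolled twice, e.g. (3, 5).
    a, b = rng.integers(1, 7, size=2)
    # Coin flip selects which input gives the number of dice to roll.
    n = a if rng.integers(2) == 0 else b
    # Output: sum of n dice.
    return (a, b), rng.integers(1, 7, size=n).sum()

X, y = zip(*(make_record(rng) for _ in range(500)))
X, y = np.array(X, dtype=float), np.array(y, dtype=float)

def knn_kde_density(x_query, X, y, k=20, sigma=1.0):
    """Conditional density p(y | x): take the k nearest neighbors and
    place a normal kernel with common-sense variance sigma**2 on each
    neighbor's output (a KDE over the neighbor outputs)."""
    d = np.linalg.norm(X - x_query, axis=1)
    nn = y[np.argsort(d)[:k]]
    grid = np.arange(nn.min() - 3, nn.max() + 4, 0.25)
    dens = np.exp(-0.5 * ((grid[:, None] - nn[None, :]) / sigma) ** 2).sum(axis=1)
    return grid, dens / np.trapz(dens, grid)

# For input (3, 5) the true conditional distribution is an equal mixture
# of the sums of 3 dice and of 5 dice, hence bimodal.
grid, dens = knn_kde_density(np.array([3.0, 5.0]), X, y)
```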

That is not all. The freely available TensorFlow library is supposedly capable of detecting gaps in the data and returning confidence intervals that grow wider where the data are sparse. That is already a mockery of science. All you need to do to identify these gaps is to generate new inputs as evenly distributed points over the domain of definition, find for each the distance to the nearest dataset point, record it, and fit a new model that tells you your training data density. Why use TensorFlow, when it needs 50 lines of code and a student can do it in an hour?
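A sketch of that gap-detection recipe, under the assumption of a box-shaped domain and using scipy's cKDTree for the nearest-neighbor search; the grid resolution is an arbitrary choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def data_density_probes(X, grid_per_axis=25):
    """Return probe points covering the domain and the distance from each
    probe to its nearest training point. Large distances mark gaps; any
    regressor fit on (probes, distances) becomes a data-density model."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    axes = [np.linspace(l, h, grid_per_axis) for l, h in zip(lo, hi)]
    probes = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, X.shape[1])
    dist, _ = cKDTree(X).query(probes)  # nearest-neighbor distance per probe
    return probes, dist
```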

I tested TensorFlow on the coin-and-dice data. The returned result was compared to the true distribution by the Cramér–von Mises criterion. The accuracy was 15%; KNN gives 85%. I made my own method, a slight improvement on KNN, which raised the accuracy to 90%.
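For anyone who wants to repeat the comparison, a sketch using scipy's two-sample Cramér–von Mises test; note the second sample here is only a stand-in for draws from the model under test, and my exact "accuracy" scoring is not reproduced in this snippet.

```python
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(1)

# True conditional distribution for input (3, 5): coin picks 3 or 5 dice.
true_sample = np.array([rng.integers(1, 7, size=rng.choice([3, 5])).sum()
                        for _ in range(1000)], dtype=float)

# Placeholder for samples drawn from the trained model being evaluated.
model_sample = true_sample + rng.normal(0, 0.5, size=1000)

res = cramervonmises_2samp(true_sample, model_sample)
print(res.statistic, res.pvalue)
```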

I do not believe that the scientists promoting BNNs are unaware that this technology is fake. My question is: what can we do about it? Say I publish my research and contact a scientist promoting BNNs directly; he ignores me and keeps promoting his research. None of us likes it when doctors prescribe expensive drugs where ordinary drugs are the cure, or when auto mechanics suggest replacing parts that still work. Isn't that the same thing?

I will add links to my published research exposing the weaknesses of BNNs for those who are interested.
