In the Bayes approach, individual detection thresholds are determined for evaluating the probability of detection and the probability of false alarm, while in the NP approach the probability of false alarm is fixed. Please explain?
With NP inference we set our Type I error (being wrong by over-interpretation) at a fixed value, usually 5%; that fixed value is how we protect ourselves from over-interpretation. With the Bayesian approach we state our prior probability and use the likelihood to update it. The Bayesian approach makes sense if the probability of detection and the probability of false alarm are reported as the update of your prior by your result.
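Here is a minimal sketch of that contrast for a single Gaussian observation (my own illustration, not part of the answer above): H0: x ~ N(0, 1) versus H1: x ~ N(mu, 1). The signal mean, the fixed P_FA, the prior, and the observed value are all assumed for the example.

```python
# Sketch (illustrative assumptions throughout): contrasting NP and Bayesian
# detection for a single Gaussian observation, H0: x ~ N(0, 1), H1: x ~ N(mu, 1).
from scipy.stats import norm

mu = 2.0      # assumed signal mean under H1
sigma = 1.0   # common noise standard deviation

# Neyman-Pearson: fix the false-alarm probability, derive the threshold from it,
# and report whatever detection probability that threshold then gives.
p_fa = 0.05
threshold = sigma * norm.ppf(1 - p_fa)        # decide H1 when x > threshold
p_d = 1 - norm.cdf((threshold - mu) / sigma)  # resulting probability of detection
print(f"NP: threshold = {threshold:.3f}, P_D = {p_d:.3f} at fixed P_FA = {p_fa}")

# Bayesian: state a prior probability that a signal is present, then use the
# likelihood of the observed value to update it; the decision comes from the
# posterior (e.g. declare a detection when P(H1 | x) > 0.5), not from a fixed P_FA.
prior_h1 = 0.3   # assumed prior probability of a signal
x = 1.8          # an example observation
lik_h1 = norm.pdf(x, loc=mu, scale=sigma)
lik_h0 = norm.pdf(x, loc=0.0, scale=sigma)
posterior_h1 = lik_h1 * prior_h1 / (lik_h1 * prior_h1 + lik_h0 * (1 - prior_h1))
print(f"Bayes: P(H1 | x = {x}) = {posterior_h1:.3f}")
```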
Bayes and NP inference differ in many ways, so you can expect a diversity of replies to your query.
Bayes' theorem serves as the link between two different partitionings: they partition the same outcomes by A and by B in opposite orders, which yields the inverse probabilities. It shows how to use new evidence to update beliefs.
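To make that concrete, here is the standard statement for a partition into A and its complement (my own addition; in detection terms you can read A as "signal present" and B as "the detector declares a detection"):

$$P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid A^{c})\,P(A^{c})}.$$

The joint probability P(A and B) can be factored either as P(B | A) P(A) (likelihood times prior) or as P(A | B) P(B) (posterior times evidence); equating the two factorizations gives the inverse probability P(A | B).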
The Neyman-Pearson lemma works with the likelihood ratio: one manipulates the ratio algebraically to see whether there is a key statistic in it whose size tracks the size of the ratio (i.e. whether a large value of the statistic corresponds to a small ratio or to a large one).
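As an illustrative example (my own, assuming a single observation x with H0: x ~ N(0, sigma^2) and H1: x ~ N(mu, sigma^2), mu > 0), the likelihood ratio is

$$\Lambda(x) \;=\; \frac{f_1(x)}{f_0(x)} \;=\; \exp\!\left(\frac{\mu x}{\sigma^{2}} - \frac{\mu^{2}}{2\sigma^{2}}\right),$$

which is increasing in x, so the event $\Lambda(x) > k$ is the same as $x > t$ for some threshold t. The test therefore reduces to comparing the simple statistic x with a threshold chosen so that $P(x > t \mid H_0)$ equals the fixed false-alarm probability.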
As Mr David rightly said, they differ in many ways. You can use whichever inference relates to your event(s).