Detection operations generally draw on four largely independent sources of information:
1. The target features in the signal, by which targets can be positively recognized as targets.
2. The non-target features in the signal, by which non-targets (noise) can be recognized as non-targets, and by which, in anomaly detection, targets can be recognized as “not non-targets” (the double negative amounting to a positive detection of sorts).
3. One's prior expectation (probability) of encountering a target, which generally changes from one detection operation to another, or even within a single operation (if, for instance, detecting one target raises the expectation of facing another).
4. The costs of detection errors (missing targets and prosecuting false alarms), which generally change from one detection operation to another.
The novice focuses on (1); the academic on (1) and (2); and the engineer with a genuinely risky real-world detection problem to solve, on all of (1) through (4).
All of these sources of information are fused in the detector’s performance (the probabilities of detection PD(s) and false alarm PFA(s)) and in the detector’s sensitivity setting (s, which is turned down to reduce false alarm rates when they are intolerably high).
The route to this fusion is made explicit by adjusting the sensitivity setting s to minimize the expected cost C of operations:
C = (1-PD(s)) * p * C_miss + (1-p) * PFA(s) * C_false
C = expected cost of operations
C_miss = cost of missing a target
C_false = cost of prosecuting a false alarm
p = the prior probability of facing a target
PD(s) = probability of detection, given that a target is present in the signal
PFA(s) = probability of falsely registering a detection, given that no target is present in the signal
s = sensitivity setting of the detector
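To make the minimization concrete, here is a minimal sketch in Python. It assumes an equal-variance Gaussian detector, in which PFA(s) = Q(s) and PD(s) = Q(s - d') for a detectability index d'; the values of d', p, C_miss, and C_false are illustrative choices, not taken from anything above.

```python
import math

def Q(x):
    """Upper-tail probability of the standard normal (1 - CDF)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def expected_cost(s, d_prime, p, c_miss, c_false):
    """Expected cost C(s) per the formula above, for an assumed
    equal-variance Gaussian detector: PFA(s) = Q(s), PD(s) = Q(s - d')."""
    pd = Q(s - d_prime)   # probability of detection at setting s
    pfa = Q(s)            # probability of false alarm at setting s
    return (1.0 - pd) * p * c_miss + (1.0 - p) * pfa * c_false

# Illustrative numbers, not from the text: detectability d' = 2,
# a 10% prior on a target, and misses costed 100x false alarms.
d_prime, p, c_miss, c_false = 2.0, 0.1, 100.0, 1.0

# Brute-force sweep of the setting s (here a decision threshold:
# raising s lowers sensitivity, trading missed targets for fewer false alarms).
candidates = [k / 100.0 for k in range(-300, 601)]
s_star = min(candidates, key=lambda s: expected_cost(s, d_prime, p, c_miss, c_false))
print(f"s* = {s_star:.2f}, C(s*) = {expected_cost(s_star, d_prime, p, c_miss, c_false):.3f}")
```

With these numbers the optimum lands near s* = -0.20: the 100x miss cost and the 10% prior push the detector toward high sensitivity, accepting a false alarm probability near 0.58 to hold the detection probability near 0.99.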
This cost minimization is the Bayes (minimum expected cost) criterion of detection theory; the Neyman-Pearson criterion, with which it is often paired, instead maximizes PD subject to a fixed ceiling on PFA.
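Setting the derivative of C with respect to s to zero makes the optimum explicit; a sketch, assuming PD(s) and PFA(s) are differentiable in s:

```latex
\frac{dC}{ds} = -\,p\,C_{\mathrm{miss}}\,\frac{dP_D}{ds}
              + (1-p)\,C_{\mathrm{false}}\,\frac{dP_{FA}}{ds} = 0
\quad\Longrightarrow\quad
\frac{dP_D}{dP_{FA}} = \frac{(1-p)\,C_{\mathrm{false}}}{p\,C_{\mathrm{miss}}}
```

That is, the optimal sensitivity setting is the point where the slope of the ROC curve equals the cost-weighted odds against a target; since the ROC slope at a given setting equals the likelihood ratio there, this is the familiar likelihood-ratio test.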