With a discrimination task I know how to work out whether performance for a particular participant is better than chance or not. This is just done by comparing the observed number of correct responses with the frequency of correct responses expected by chance (which depends on the number of response options in the task), e.g. with a binomial test.
However, with a signal-detection task I am not sure what to do. How does one determine whether, for instance, a d-prime of 0.04 indicates responding that is better than would be expected by chance? I don't think one can use the usual binomial formula (as with a simple 2AFC discrimination task), because the numbers of signal-present and signal-absent trials are not necessarily equal in a detection task; the expected frequencies for a random responder would also presumably differ depending on their level of bias towards 'yes' or 'no'. So the whole thing is far more complicated. Presumably this is a fairly common issue with detection tasks, so there must be a formula somewhere to deal with it.
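One approach I have come across (offered here as a sketch, not a definitive answer) is a z-test on d-prime itself, using the asymptotic variance of d-prime from Gourevitch and Galanter (1967): each of the hit rate and false-alarm rate contributes p(1 - p) / (n * phi(z)^2) to the variance, where phi is the standard normal density evaluated at the corresponding z-score. This handles unequal numbers of present and absent trials, and bias drops out because d-prime itself is bias-free. The function name and the log-linear correction below are my own choices:

```python
from scipy.stats import norm

def dprime_test(hits, misses, false_alarms, correct_rejections):
    """Z-test of d' against 0 (chance), using the asymptotic variance
    of d' (Gourevitch & Galanter, 1967). Returns (d', z, two-sided p)."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Log-linear correction keeps rates away from 0 and 1,
    # which would give infinite z-scores
    ph = (hits + 0.5) / (n_signal + 1)
    pf = (false_alarms + 0.5) / (n_noise + 1)
    zh, zf = norm.ppf(ph), norm.ppf(pf)
    d = zh - zf
    # Each rate contributes p(1-p) / (n * phi(z)^2) to Var(d')
    var = (ph * (1 - ph) / (n_signal * norm.pdf(zh) ** 2)
           + pf * (1 - pf) / (n_noise * norm.pdf(zf) ** 2))
    z = d / var ** 0.5
    p = 2 * norm.sf(abs(z))
    return d, z, p

# A heavily 'yes'-biased but random responder: d' is near 0 and the
# test is non-significant despite the bias
print(dprime_test(45, 15, 44, 16))
# A genuinely sensitive responder: d' is large and the test rejects chance
print(dprime_test(50, 10, 10, 50))
```

Whether this asymptotic test is well behaved for very small trial counts is another question; a permutation test that shuffles the responses across trials might be safer in that case.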