So far, after the matched filter or correlator, we get a projection vector. Now, using the maximum likelihood (ML) and MAP criteria, how do we arrive at a decision?
The output vector (denoted by v) from the correlator represents a received symbol with noise. Assume there are M possible transmitted symbols, denoted by s_i, i = 1, 2, ..., M. We want to decide which one of the M symbols was most likely transmitted, based on the correlator output vector v.
The optimum decision rule is to take s_k (among the M symbols) as the transmitted symbol if the conditional probability P(s_k | v) is maximum. This is called the maximum a posteriori (MAP) decision, which guarantees the minimum probability of error.
By Bayes' theorem, if each of the M symbols is equally likely to be transmitted, that is, P(s_i) = 1/M, then the above MAP rule is equivalent to taking s_k as the transmitted symbol if the conditional probability P(v | s_k) is maximum. This decision rule is called the maximum-likelihood (ML) decision. Note that if the probabilities of the M transmitted symbols are not equal, ML is not the same as MAP, though in practice an equal (i.e. uniform) distribution is often assumed for the M transmitted symbols.
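To make the difference concrete, here is a small Python sketch (my own illustration, with made-up likelihood and prior values) comparing the ML and MAP decisions for the same correlator output:

```python
import numpy as np

# Hypothetical example: M = 4 symbols. The likelihoods p(v | s_i) are assumed
# to have already been evaluated at the observed correlator output v,
# and the priors P(s_i) are assumed known (values are made up).
likelihoods = np.array([0.05, 0.40, 0.35, 0.20])   # p(v | s_i), i = 1..M
priors      = np.array([0.10, 0.15, 0.45, 0.30])   # P(s_i), sums to 1

# ML decision: pick the symbol whose likelihood p(v | s_k) is largest.
k_ml = np.argmax(likelihoods)

# MAP decision: pick the symbol maximizing the posterior P(s_k | v).
# By Bayes' theorem P(s_k | v) is proportional to p(v | s_k) * P(s_k),
# so the common factor p(v) can be ignored when comparing symbols.
k_map = np.argmax(likelihoods * priors)

print("ML decision: s_%d, MAP decision: s_%d" % (k_ml + 1, k_map + 1))
# With equal priors P(s_i) = 1/M the two decisions always coincide;
# with the unequal priors above they differ.
```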
In an AWGN channel, the likelihood p(v | s_k) is proportional to exp(-||v - s_k||^2 / N_0), so ML is equivalent to taking s_k as the transmitted symbol if the distance between v and s_k is minimum, that is, taking the symbol s_k that is closest to v. This is called the minimum distance decision.
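For illustration, here is a minimal Python sketch of the minimum distance decision, assuming a hypothetical 2-dimensional QPSK-like constellation:

```python
import numpy as np

# Hypothetical QPSK-like constellation: M = 4 symbols in a 2-D signal space.
symbols = np.array([[ 1.0,  1.0],
                    [-1.0,  1.0],
                    [-1.0, -1.0],
                    [ 1.0, -1.0]])

def min_distance_decision(v, symbols):
    """Return the index of the symbol closest to the correlator output v.

    In AWGN, p(v | s_k) is proportional to exp(-||v - s_k||^2 / N_0), so
    maximizing the likelihood is the same as minimizing ||v - s_k||.
    """
    distances = np.linalg.norm(symbols - v, axis=1)
    return int(np.argmin(distances))

# Example: a noisy observation near the first constellation point.
v = np.array([0.9, 1.2])
print("Decided symbol index:", min_distance_decision(v, symbols))  # -> 0
```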
Nice to see your reply, and to have an opportunity to discuss this interesting topic.
Well, 1/0 estimation is just the binary case, which is a special case of the M-ary case with M = 2, so my answer above also applies to the binary case. As indicated in my previous answer, maximum likelihood (ML) is equivalent to MAP if and only if all M symbols are equally likely. In the binary case (M = 2), this means P(s=0) = P(s=1). If this holds, then ML (equivalent to MAP) yields the minimum error probability. The ML decision is based on the conditional probability P(v | s_k), which is referred to as the likelihood, whereas P(s_k) is the a priori probability.
On the other hand, if the M symbols are not equally likely (whether M equals 2 or not), then ML is not the same as MAP. In that case, MAP yields the minimum error probability but ML does not.
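To make this concrete for the binary case, here is a small Monte Carlo sketch (my own illustration, assuming antipodal signaling s in {+A, -A} in AWGN with unequal priors). The MAP rule shifts the decision threshold toward the less likely symbol and gives a lower error rate than the ML threshold at 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary antipodal signaling in AWGN:
# s = +A is sent with probability p1, s = -A with probability 1 - p1.
A, sigma = 1.0, 1.0
p1 = 0.8                     # unequal priors: P(s = +A) = 0.8
n = 200_000

sent_plus = rng.random(n) < p1           # True -> +A was sent
s = np.where(sent_plus, A, -A)
v = s + sigma * rng.standard_normal(n)   # correlator output with AWGN

# ML decision: threshold at 0 (minimum distance to +A / -A).
ml_decision = v > 0.0

# MAP decision: compare p(v|+A)P(+A) with p(v|-A)P(-A); for Gaussian noise
# this reduces to a shifted threshold
#   v > (sigma^2 / (2A)) * ln(P(-A) / P(+A)),
# which moves toward the less likely symbol (-A here).
t_map = (sigma**2 / (2 * A)) * np.log((1 - p1) / p1)
map_decision = v > t_map

print("ML  error rate:", np.mean(ml_decision != sent_plus))
print("MAP error rate:", np.mean(map_decision != sent_plus))  # lower than ML
```

With equal priors (p1 = 0.5) the MAP threshold falls back to 0 and the two error rates coincide, which matches the statement above.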