By definition, the capacity of a communication channel is the maximum of the mutual information between the input (X) and output (Y) of the channel, where the maximization is over the input distribution, that is C = sup_{p_X(x)} I(X;Y).
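To make this definition concrete, here is a minimal sketch (my own illustration, not from any particular textbook) that evaluates I(X;Y) for a binary symmetric channel with an assumed crossover probability of 0.1 and maximizes it by grid search over the input distribution, comparing against the known closed form C = 1 - H2(eps):

```python
import math

def mutual_information(p_x, channel):
    # channel[i][j] = P(Y = j | X = i); p_x is the input distribution
    p_y = [sum(p_x[i] * channel[i][j] for i in range(len(p_x)))
           for j in range(len(channel[0]))]
    mi = 0.0
    for i, px in enumerate(p_x):
        for j, pyx in enumerate(channel[i]):
            if px > 0 and pyx > 0:
                mi += px * pyx * math.log2(pyx / p_y[j])
    return mi

# Binary symmetric channel with crossover probability eps (illustrative choice)
eps = 0.1
bsc = [[1 - eps, eps],
       [eps, 1 - eps]]

# Approximate C = sup over p_X of I(X;Y) by a grid search over p_X = (p, 1-p)
best = max(mutual_information([p, 1 - p], bsc)
           for p in (k / 1000 for k in range(1001)))

# Known closed form for the BSC: C = 1 - H2(eps)
closed_form = 1 + eps * math.log2(eps) + (1 - eps) * math.log2(1 - eps)
print(best, closed_form)
```

The grid search attains the closed-form value at the uniform input distribution p = 0.5, which is the capacity-achieving input for a symmetric channel.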

From my understanding (please correct me if I'm wrong), for a noisy channel in which some input symbols may be confused at the output, we can draw a confusability graph of the channel: the nodes are the input symbols, and two nodes are connected if and only if the corresponding symbols can be confused at the output.

If we had to communicate using messages consisting of a single symbol each, then the largest number of messages that could be sent over such a channel without any risk of confusion would be α(G), the size of a largest independent set of vertices in the graph (in this single-use case, the Shannon capacity of the graph reduces to the independence number α(G)).
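As a small illustration of the single-symbol case, the following sketch (my own, brute-force and only suitable for small graphs) computes α(G) for the confusability graph of Shannon's classic 5-symbol "noisy typewriter", whose confusability graph is the 5-cycle C5:

```python
from itertools import combinations

def independence_number(n, edges):
    # Size of a largest vertex subset containing no edge (brute force,
    # fine for small graphs; the general problem is NP-hard)
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return size
    return 0

# Confusability graph of the 5-symbol noisy typewriter: a 5-cycle,
# each symbol confusable with its two cyclic neighbours
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(independence_number(5, c5))  # -> 2
```

So with single-symbol messages only 2 of the 5 symbols can be used without risk of confusion; famously, using blocks of length 2 one can do better, which is why the Shannon capacity of a graph is defined as a limit over block lengths rather than just α(G).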

Does this mean that, for such a channel, the channel capacity (the maximum mutual information between the input and output) is α(G), and that it is achieved by sending only the symbols of the largest independent set?
