13 October 2011

Information theory, formulated by Claude Shannon, says that information reduces uncertainty. Diffusion of innovation theory, formulated by Everett Rogers, says that information increases uncertainty. How do we resolve this conflict in the relationship of information to uncertainty?

In his information theory, Claude Shannon defined "entropy" as a measure of uncertainty with respect to some variable or event: the greater the uncertainty, the greater the "Shannon entropy." Shannon proposed that information reduces uncertainty and therefore reduces entropy. The classic example is a coin toss. Before the toss, we are uncertain which of the two possible outcomes will occur. By observing the outcome (say, heads), we gain information that reduces the uncertainty about this event to zero.
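For the numerically inclined, here is a minimal Python sketch of the arithmetic behind the coin example (the helper name `shannon_entropy` is mine, chosen for illustration):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the toss: heads and tails are equally likely, so uncertainty is maximal.
print(shannon_entropy([0.5, 0.5]))  # 1.0 bit

# After observing the outcome (say, heads): no uncertainty remains.
print(shannon_entropy([1.0, 0.0]))  # 0.0 bits
```

One observed outcome delivers exactly the one bit of information needed to eliminate the one bit of uncertainty.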

The theory of diffusion of innovation, developed largely by Everett Rogers, studies how new ideas and technologies spread through (or are adopted by) a population. Through his research, Rogers showed that innovation creates uncertainty, and that this uncertainty becomes a barrier to adoption. For example, when people in rural areas are taught to boil their water before drinking it, this "innovation" is resisted because it conflicts with an established cultural belief that boiled water makes people sick; the demand to boil water clashes with cherished beliefs and is therefore rejected.

So Shannon demonstrated that information reduces uncertainty, while Rogers demonstrated that information increases it. How do we resolve this apparent conflict? I will present a few thoughts and would appreciate your comments.

I would propose that Shannon and Rogers are considering two different aspects of information. Shannon's information is a signal (or indicator) with minimal or no meaning (one "bit" of information = low meaning-value), whereas Rogers's information refers to multidimensional concepts carrying a great deal of meaning (multidimensional information = high meaning-value). If a given unit of information can be placed on a continuum of meaning-value, then the greater the meaning embedded in a message, the greater the likelihood that the implied meaning of the message-sender (boiled water = prevention of illness) will conflict with the perceived meaning of the message-receiver (boiled water = cause of illness).

Gregory Bateson defines information as "the difference that makes a difference." Loet Leydesdorff defines knowledge as "the meaning that makes a difference." Perhaps, then, information with low meaning-value reduces uncertainty in a low-dimensional state space (such as a binary signaling system), whereas information with high meaning-value increases uncertainty in a multi-dimensional state space (such as a cultural belief system). Ultimately, this raises the question of how information differs from knowledge, and how information from a sender is evaluated and assimilated into a receiver's knowledge base.
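To make the contrast concrete, here is a toy Python sketch; the probability values are illustrative assumptions of mine, not data from Shannon or Rogers. The idea is that a binary signal collapses the receiver's distribution, while a high-meaning message enlarges the receiver's space of plausible beliefs:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Low-dimensional case (Shannon): a binary signal is observed, and the
# distribution collapses onto one outcome. Uncertainty falls to zero.
print(shannon_entropy([0.5, 0.5]))  # before the signal: 1.00 bit
print(shannon_entropy([1.0, 0.0]))  # after the signal:  0.00 bits

# Multi-dimensional case (a toy reading of Rogers): the receiver starts
# nearly certain of an entrenched belief ("boiled water causes illness").
# The high-meaning message does not simply replace that belief; it adds
# competing interpretations, spreading probability over more states.
print(shannon_entropy([0.9, 0.1]))            # before the message: ~0.47 bits
print(shannon_entropy([0.4, 0.3, 0.2, 0.1]))  # after the message:  ~1.85 bits
```

In this toy model the binary signal behaves exactly as Shannon describes, while the high-meaning message increases entropy because it expands the dimensionality of the receiver's state space, which is one way of formalizing the resolution proposed above.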

Thoughts?
