I want to compare two or more entropy measures. I found many research papers that compare them based on their performance with respect to linguistic variables, using a numerical example. But the performance changes with the chosen values, so the results can't be generalized.
Why would you want to compare two entropy measures? I assume that most, if not all, entropy measures should yield similar results, though on different scales. My experience, for example, suggests that Tsallis entropy will yield a higher value than Shannon entropy. I might be very wrong, though.
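A quick way to check that intuition numerically is to compute both measures on the same distribution. The sketch below uses the standard formulas (Shannon in nats, Tsallis with index q); the toy probability vector is an arbitrary example of mine, not taken from any paper:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats: H = -sum(p * ln p)."""
    p = p[p > 0]  # ignore zero-probability outcomes
    return -np.sum(p * np.log(p))

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum(p**q)) / (q - 1); recovers Shannon as q -> 1."""
    if np.isclose(q, 1.0):
        return shannon_entropy(p)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Arbitrary toy distribution for the comparison.
p = np.array([0.5, 0.25, 0.15, 0.1])
print(f"Shannon:           {shannon_entropy(p):.4f} nats")
for q in (0.5, 1.5, 2.0):
    print(f"Tsallis (q={q}): {tsallis_entropy(p, q):.4f}")
```

On this distribution, Tsallis comes out higher than Shannon for q < 1 and lower for q > 1, so whether one measure "yields a higher value" really depends on the entropic index and the data, which is consistent with the caveat above.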
To compare entropy measures, you can generalize with Rényi entropy, choosing the Rényi exponent appropriately for the type of data you have. The higher the Rényi exponent, the more the entropy is dominated by the most probable outcomes in your set.
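To make that concrete, here is a minimal sketch (the probability vector is again an arbitrary example) of the Rényi entropy H_α = ln(Σ p_i^α) / (1 − α). As α grows, H_α decreases toward the min-entropy −ln(max p), i.e. only the most probable outcome matters:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha = ln(sum(p**alpha)) / (1 - alpha), in nats.
    alpha -> 1 recovers Shannon; large alpha is dominated by max(p)."""
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))  # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

p = np.array([0.5, 0.25, 0.15, 0.1])
for alpha in (0.0, 0.5, 1.0, 2.0, 10.0, 100.0):
    print(f"alpha={alpha:6.1f}: H = {renyi_entropy(p, alpha):.4f}")
# The alpha -> infinity limit: entropy of the single most probable outcome.
print(f"min-entropy (alpha -> inf): {-np.log(p.max()):.4f}")
```

Running this shows H_α falling monotonically from ln 4 at α = 0 (which counts outcomes, ignoring probabilities) toward −ln 0.5 as α grows, which is the sense in which a higher exponent weights the most common probability in your set.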
Even if my paper refers to brain function and multifractals, the concept generalizes easily to many probability distributions, in countless different fields.
Preprint: Multifractal exponents and Rényi entropy: a clue for brain function?