In the computation of Sample Entropy and Approximate Entropy, the distance between any two template vectors is always taken as the maximum absolute difference between their corresponding scalar components (i.e., the Chebyshev distance). Why is the distance defined this way and not in any other way (e.g., the Euclidean distance)? (A paper by F. Takens, "Invariants Related to Dimension and Entropy", might have the answer, but I am not able to find it anywhere.)
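To make the criterion I am asking about concrete, here is a minimal Python sketch of the template-matching step as I understand it; the function name and the test signal are just illustrative, and the point is only that two templates count as "close" iff every component-wise difference is at most the tolerance r, which is the max-norm rule in question.

```python
import numpy as np

def chebyshev_match_count(x, m, r):
    """Count pairs of length-m templates that match under the
    Chebyshev (max-norm) criterion used in SampEn/ApEn.
    Illustrative helper, not a full entropy implementation."""
    n = len(x)
    # Build all length-m template vectors x[i], ..., x[i+m-1]
    templates = np.array([x[i:i + m] for i in range(n - m + 1)])
    count = 0
    for i in range(len(templates)):
        for j in range(i + 1, len(templates)):
            # Chebyshev distance: the largest component-wise absolute difference
            d = np.max(np.abs(templates[i] - templates[j]))
            if d <= r:  # match only if EVERY component differs by <= r
                count += 1
    return count

signal = np.array([1.0, 1.1, 0.9, 1.05, 1.2, 0.95])
print(chebyshev_match_count(signal, m=2, r=0.2))
```

With the Euclidean distance, by contrast, one large component-wise difference could be offset by several small ones, so the two criteria classify template pairs differently; my question is why the max norm is the standard choice.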
