Some algorithms are naturally fixed-point based, for instance algorithms that sort lists or simplify expressions.

To illustrate this topic, let A denote a sorting algorithm that works by transposing two list elements that A finds in the wrong order. If L is an ordered list, then A does nothing on L, that is to say, A(L) = L, and L is a fixed point of A. Otherwise, A is iterated until the list becomes a fixed point.
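The iteration described above can be sketched as follows. This is a minimal illustration, not the author's actual algorithm: the names `transpose_once` and `sort_by_fixed_point` are my own, and I assume A performs a single transposition per application.

```python
def transpose_once(lst):
    """One application of the hypothetical algorithm A: swap the first
    adjacent pair found in the wrong order, then return the list."""
    result = list(lst)
    for i in range(len(result) - 1):
        if result[i] > result[i + 1]:
            result[i], result[i + 1] = result[i + 1], result[i]
            break  # A performs one transposition per application
    return result

def sort_by_fixed_point(lst):
    """Iterate A until the list is a fixed point, i.e. A(L) == L."""
    current = list(lst)
    while True:
        nxt = transpose_once(current)
        if nxt == current:  # fixed point reached: A found nothing to swap
            return current
        current = nxt
```

Note that the loop's termination condition is exactly the fixed-point equation A(L) = L; no separate "is the list sorted?" check is needed.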

The advantage of this algorithm is that it works as a test for its own result: saying that L is a fixed point means that A cannot find any out-of-order pair of elements in L. Thus, A also tests the fitness of its output. In other words, A is a self-testing algorithm.
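The self-testing property can be made explicit: checking whether L is sorted reduces to checking whether L is a fixed point of A. Again a hedged sketch with my own names, assuming the single-swap variant of A described above.

```python
def transpose_once(lst):
    """One application of the hypothetical algorithm A:
    swap the first adjacent out-of-order pair, if any."""
    result = list(lst)
    for i in range(len(result) - 1):
        if result[i] > result[i + 1]:
            result[i], result[i + 1] = result[i + 1], result[i]
            break
    return result

def is_sorted(lst):
    """Self-test: L is sorted exactly when L is a fixed point of A,
    i.e. when A can find no out-of-order pair to transpose."""
    return transpose_once(lst) == lst
```

The sorting routine and its correctness test are the same function applied in two ways, which is the point of the paragraph above.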

Given these considerations, a programming paradigm based on fixed-point algorithms would yield self-testing programs and, in that sense, programs resistant to bugs. This is why I think it is worth creating such a paradigm.

Perhaps not every algorithm can be expressed in this way. Nevertheless, there are no pure paradigms. For instance, the Haskell programming language is said to be purely functional with no side effects. This is not entirely true: IO actions, such as IO references, are not pure and do have side effects. However, separating the two kinds of functions is compulsory, so pure and impure functions live in separate rooms. In any case, the language encourages using pure functions as far as possible.
