In statistical pattern recognition, where we can in principle work with high-dimensional feature spaces, we run into "the curse of dimensionality." Briefly, this describes the phenomenon that although one can keep adding features (dimensions) to try to improve performance, the number of measurements required for robust recognition grows exponentially with the number of dimensions. Roughly speaking, if N samples suffice to characterize one dimension, then d dimensions require on the order of N^d samples.

The typical analytical way of dealing with this is to reduce the number of dimensions, for example with a Karhunen-Loeve (K-L) transform, keeping only the best basis vectors to form a reduced-dimensionality representation. But there is often a performance loss, since the discarded components may still carry some discriminative information.
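As a rough illustration, here is a minimal NumPy sketch of a K-L projection (the function name and the use of NumPy are my own choices, not part of the original discussion): it centers the data, eigendecomposes the covariance matrix, and projects onto the k leading eigenvectors.

import numpy as np

def kl_transform(X, k):
    # X: (n_samples, n_features) data matrix; k: target dimensionality.
    # Center the data so the covariance reflects variation about the mean.
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the (symmetric) covariance matrix.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the k largest.
    order = np.argsort(eigvals)[::-1][:k]
    basis = eigvecs[:, order]      # (n_features, k) K-L basis
    return Xc @ basis              # (n_samples, k) reduced features

For example, kl_transform(X, 4) would map 12-dimensional measurements X of shape (n, 12) down to 4 dimensions, at the cost of whatever variance the discarded eigenvectors carried.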

Another method, which I have used to great advantage, is a hierarchical structure of lower-dimensional pattern spaces, whose even lower-dimensional outputs are then passed as features up the processing chain. In this way we could handle 12-dimensional feature vectors in a step-wise process that only ever works with 4 dimensions at a time, as sketched below. The whole synergistic system was extremely robust and gave very high performance.
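To make the idea concrete, here is a hypothetical sketch (the grouping into three 4-dimensional subspaces and the names are my assumptions, reusing the kl_transform helper above, not the actual system described): each small pattern space is reduced independently, and the concatenated low-dimensional outputs are passed up to the next processing level.

def hierarchical_features(X, stage_dims=(4, 4, 4), out_per_stage=2):
    # Split the full feature vector into small pattern spaces and
    # reduce each one independently (hypothetical grouping).
    outputs, start = [], 0
    for d in stage_dims:
        chunk = X[:, start:start + d]   # one low-dimensional pattern space
        outputs.append(kl_transform(chunk, out_per_stage))
        start += d
    # The concatenated low-dimensional outputs feed the next level,
    # which never sees more than a few dimensions at once.
    return np.hstack(outputs)

So hierarchical_features(X) on X of shape (n, 12) yields an (n, 6) feature matrix for the next stage, and no single stage ever has to cope with the full 12-dimensional space.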
