I just read the paper. My first impression is that it is harder to implement and does not exhibit intrinsically neural properties such as spikes or activations; it reads more like an algorithm that partitions the instance space into subregions and classifies by the resulting boundaries. However, it does seem faster, at least from an optimization perspective, since it is capable of learning from a smaller amount of data. As far as I can see, there is also no discussion of overfitting or of regularization possibilities for the algorithm. In addition, the authors fix a rigid three-layer structure, and it remains an open question how to extend it to deeper networks. I would therefore personally still prefer backpropagation, but it is a good contribution to the learning of MLPs.
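The paper's actual algorithm is not reproduced here, but as a toy sketch of the general idea I am describing (partition the instance space into subregions, label each region from the training data, then classify by region membership), something like the following could work. All function names and the grid-based partitioning scheme are my own illustrative assumptions, not the paper's method:

```python
import numpy as np

def fit_grid_regions(X, y, bins=4):
    """Partition the instance space into a regular grid of subregions and
    label each cell by the majority class of the training points inside it.
    (Illustrative stand-in for a generic region-based classifier.)"""
    lo, hi = X.min(axis=0), X.max(axis=0)
    edges = [np.linspace(lo[d], hi[d], bins + 1) for d in range(X.shape[1])]
    members = {}
    for xi, yi in zip(X, y):
        cell = tuple(
            min(np.searchsorted(edges[d], xi[d], side="right") - 1, bins - 1)
            for d in range(X.shape[1])
        )
        members.setdefault(cell, []).append(yi)
    # Majority vote within each occupied cell.
    cell_labels = {c: max(set(v), key=v.count) for c, v in members.items()}
    return edges, cell_labels

def predict_region(edges, cell_labels, x, default=0):
    """Classify a point by the label of the subregion (grid cell) it falls in;
    empty cells fall back to a default class."""
    bins = len(edges[0]) - 1
    cell = tuple(
        min(max(np.searchsorted(edges[d], x[d], side="right") - 1, 0), bins - 1)
        for d in range(len(x))
    )
    return cell_labels.get(cell, default)
```

A classifier like this needs no iterative optimization at all, which is consistent with the speed advantage, but it also makes the missing overfitting/regularization discussion conspicuous: with a fine enough grid, every cell simply memorizes its training points.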