The usual way to obtain neural networks that recognize given data is incremental training by backpropagation. But there are much better alternatives. The following is taken from Section 3 of the article ''Equivalence of Perceptrons and Polyhedrons'' that I recently published on ResearchGate.

BEGIN EXCERPT

Summing up, polyhedrons and perceptron neural networks are equivalent. P=PNN. So what?

Perceptron networks are often used for pattern recognition. We prefer to speak of data recognition. The data are finite subsets of $\mathbb{R}^m$. A way to recognize data that is much more efficient and controllable than backpropagation or support vector machines is to construct polyhedrons adapted to the data, including the specification of distances to the hyperplanes $f_i(x)=0$ known as ''decision boundaries'', and then to convert the polyhedrons into perceptron networks. These networks recognize the data perfectly, and they are built with as much room for generalization as the data could possibly allow.

As a consequence, the whole learning paradigm disappears. There is no more learning. Networks are simply gestated and, as in some myths, born with knowledge. If streams of new data keep arriving, permanent online gestation keeps the network updated. These developments could also lead to other types of neural networks, considerably more informative than perceptrons. It remains to explain how to obtain and combine hyperplanes that separate given data in a convenient way; there are several efficient ways to calculate these hyperplanes.

END EXCERPT
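
To make the idea in the excerpt concrete, here is a minimal sketch in Python/NumPy. It is not taken from the article: the toy data, the choice of hyperplane (the perpendicular bisector of the segment joining the class centroids, only one of the several possible ways to obtain a separating hyperplane), and the threshold unit are illustrative assumptions.

```python
import numpy as np

# Hypothetical two-class data in R^2 (illustrative only; not from the article).
class_a = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]])
class_b = np.array([[1.0, 1.0], [0.9, 1.2], [1.1, 0.8]])

# One simple way to obtain a separating hyperplane f(x) = w.x + b = 0:
# the perpendicular bisector of the segment joining the class centroids.
mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
w = mu_b - mu_a                          # normal vector pointing from A toward B
b = -(w @ (mu_a + mu_b)) / 2.0           # hyperplane passes through the midpoint

def perceptron_unit(x, w, b):
    """Threshold unit: outputs 1 if x lies on the positive side of f(x) = w.x + b."""
    return int(w @ x + b > 0)

# The unit recognizes the half-space containing class B. A polyhedron adapted
# to the data is an intersection of such half-spaces, realized by a layer of
# threshold units followed by an AND unit.
for x in np.vstack([class_a, class_b]):
    print(x, perceptron_unit(x, w, b))

# Distance from each point to the decision boundary f(x) = 0, i.e.
# |w.x + b| / ||w||: the kind of distance the excerpt asks to control.
margins = np.abs(np.vstack([class_a, class_b]) @ w + b) / np.linalg.norm(w)
print("minimum margin:", margins.min())
```

Stacking several such units and combining them with a conjunction unit realizes membership in a polyhedron, which is the direction of the polyhedron-to-perceptron conversion described in the excerpt.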
