As we know, the perceptron is the simplest form of a neural network:
1. it consists of a single neuron with adjustable synaptic weights and a bias;
2. it performs pattern classification only for linearly separable patterns, and only with two classes (see the sketch after this list).
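To make point 2 concrete, here is a minimal sketch of the classic perceptron learning rule, assuming NumPy is available; the function name `perceptron_train` and the toy AND dataset are illustrative, not a standard API.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Train a single neuron (weights w, bias b) on labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Step activation: predict +1 if w.x + b >= 0, else -1
            pred = 1 if np.dot(w, xi) + b >= 0 else -1
            if pred != yi:            # update only on misclassified samples
                w += lr * yi * xi     # adjust synaptic weights
                b += lr * yi          # adjust bias
    return w, b

# Linearly separable toy problem: logical AND, labels in {-1, +1}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
w, b = perceptron_train(X, y)
print(w, b)  # parameters of a separating hyperplane between the two classes
```

Because the single neuron only draws one hyperplane, this rule converges only when the two classes are linearly separable (as AND is, and XOR is not).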
By definition, a single-layer perceptron has no hidden layers, so it cannot be extended in that direction. The multilayer perceptron (MLP), however, generalizes naturally to any number of hidden layers, and once the MLP arrived, deep learning took off.
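As a rough illustration of that stacking idea, here is a minimal sketch assuming PyTorch is installed; the helper `make_mlp` is a hypothetical name used only for this example.

```python
import torch.nn as nn

def make_mlp(in_dim, hidden_dim, out_dim, n_hidden_layers):
    """Build an MLP with an arbitrary number of hidden layers, each with ReLU."""
    layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
    for _ in range(n_hidden_layers - 1):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
    layers.append(nn.Linear(hidden_dim, out_dim))
    return nn.Sequential(*layers)

# Two hidden layers already handle problems like XOR that a single perceptron
# cannot; deeper stacks follow exactly the same pattern.
model = make_mlp(in_dim=2, hidden_dim=8, out_dim=1, n_hidden_layers=2)
print(model)
```

The point of the sketch is simply that hidden layers are repeated building blocks, so depth becomes a parameter rather than a structural limit.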