Reservoir computers and extreme learning machines are typical examples of randomized neural networks (RaNNs). In both of these architectures, only the output layer is trained. Are there, conversely, neural architectures in which only the input layer is trained?
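For concreteness, here is a minimal sketch of the reservoir-computing pattern I have in mind (my own NumPy illustration on a toy next-step prediction task, not code from any particular library): the input and recurrent weights stay random, and only the linear readout is fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict x(t+1) from x(t) for a sine wave.
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))
inputs, targets = u[:-1], u[1:]

n_res = 100                                   # reservoir size
W_in = rng.uniform(-0.5, 0.5, n_res)          # random input weights, never trained
W = rng.normal(0, 1, (n_res, n_res))          # random recurrent weights, never trained
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

# Run the reservoir and collect its states.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * inputs[t])
    states[t] = x

# Train ONLY the readout, by ridge regression in closed form.
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ targets)

pred = states @ W_out
print("train MSE:", np.mean((pred - targets) ** 2))
```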
Yes, there are such architectures; they are known as "autoencoders." An autoencoder is a type of neural network trained to encode its input into a lower-dimensional representation and then decode that representation to reconstruct the original input.
See
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786), 504-507. DOI: 10.1126/science.1127647
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., & Manzagol, P. A. (2008). Extracting and Composing Robust Features with Denoising Autoencoders. In Proceedings of the 25th International Conference on Machine Learning (ICML), 1096-1103.
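To make the idea concrete, here is a minimal sketch of a one-hidden-layer linear autoencoder (my own NumPy illustration, not code from the cited papers); it trains the encoder and decoder weights jointly by gradient descent on the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 10-D that actually live near a 3-D subspace.
Z = rng.normal(size=(200, 3))
X = Z @ rng.normal(size=(3, 10))

d_in, d_hid = 10, 3
W_enc = rng.normal(0, 0.1, (d_in, d_hid))   # encoder weights
W_dec = rng.normal(0, 0.1, (d_hid, d_in))   # decoder weights

lr = 0.01
for epoch in range(2000):
    H = X @ W_enc                 # encode to a 3-D code
    X_hat = H @ W_dec             # decode back to 10-D
    err = X_hat - X               # reconstruction error
    # Gradients of the mean squared reconstruction loss.
    g_dec = H.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print("final reconstruction MSE:", np.mean((X @ W_enc @ W_dec - X) ** 2))
```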
Thank you for your response. However, I want to know whether there exist randomized neural networks in which only the input layer is trained, and which can still perform classification or regression tasks. As I understand it, autoencoders are used mainly for feature extraction.
No, you will not find such a network for supervised learning. The idea makes sense conceptually, but mathematically it does not yield proper solutions: the mapping would have to be solved backwards from one-hot codes or regression targets, which tends to produce singular (ill-conditioned) matrices and therefore poor solutions. Tuning all layers in the forward direction, or at least the last layer, gives better approximation and generalization. You can test this easily with an ELM and see for yourself. I tried this once, before attempting to reverse a neural network and use it as a generative data model, but it did not work; I then read about it and found this piece of information.
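As a rough way to run that test yourself, here is a minimal ELM sketch (my own NumPy illustration on a toy regression problem, with made-up sizes): the input weights are random and fixed, and only the output weights are solved in closed form by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: y = sin(3x) on [0, 1], with a little noise.
X = rng.uniform(0, 1, (300, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=300)

n_hidden = 50
W_in = rng.normal(0, 2, (1, n_hidden))   # random input weights, never trained
b = rng.normal(0, 1, n_hidden)           # random biases, never trained

H = np.tanh(X @ W_in + b)                # hidden-layer activations

# ELM step: solve for the output weights only, via least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

print("ELM train MSE:", np.mean((H @ beta - y) ** 2))
```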
You could use Conditional Random Fields, which provide the effect of randomized neural networks on top of an existing neural architecture. Please see the CRF described in the paper below; it is somewhat similar to your problem.
Article: EEG-Based Emotion Classification in Financial Trading Using ...
The simplest case in which the trainable weights sit directly on the input connections is the single-layer feedforward network, which consists of just two layers: the input layer and the output layer. The m input neurons are connected to the n output neurons, and each connection carries a weight w_ij. The input neurons perform no processing; they simply pass the input signals forward. All computation takes place in the output layer, which forms the weighted sum of its inputs and applies an activation function to produce the output.

During training, the weights are adjusted to minimize the error between the predicted output and the actual output; the error is propagated back through the network to update the weights, a procedure known as backpropagation. The goal of training is to find the set of weights that most accurately predicts the output for a given input. Once trained, the network can be used to make predictions on new data. Neural networks have shown great success in a wide range of applications, including image recognition, natural language processing, and speech recognition.
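As an illustration of the single-layer case described above, here is a minimal sketch (my own NumPy toy example, with made-up data and sizes) of a network whose only weights connect the input neurons to a single logistic output neuron, trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(+1, 1, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

m = 2                            # m input neurons, one output neuron
W = rng.normal(0, 0.1, m)        # weights w_ij between input and output
b = 0.0

lr = 0.1
for epoch in range(500):
    z = X @ W + b                # weighted sum of the inputs
    p = 1 / (1 + np.exp(-z))     # sigmoid activation
    grad = p - y                 # gradient of cross-entropy loss w.r.t. z
    W -= lr * X.T @ grad / len(X)
    b -= lr * grad.mean()

print("train accuracy:", np.mean((p > 0.5) == y))
```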