I'm learning how to use the recurrent neural network (RNN) model, and I'm not entirely sure about its feed-forward procedure, i.e., how the input, hidden state, and output fit together. As far as I know, the hidden layer is similar to that of a multi-layer perceptron (MLP), except that here the hidden state is derived from both the current input and the previous hidden state. What confuses me is that every source I have found reports the total number of memory cells, but none specifies the number of neurons inside each memory cell.
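To make the feed-forward step concrete, here is a minimal NumPy sketch of a vanilla (Elman) RNN layer, assuming the usual update h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h) and a linear output y_t = W_hy h_t + b_y; the sizes, weight names, and toy inputs are illustrative assumptions, not taken from any specific source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 2 input features, 3 hidden units ("memory cells"), 1 output
input_size, hidden_size, output_size = 2, 3, 1

# Parameters, shared across all time steps
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input  -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(output_size, hidden_size))  # hidden -> output
b_h = np.zeros(hidden_size)
b_y = np.zeros(output_size)

def rnn_step(x_t, h_prev):
    """One feed-forward step: the new hidden state mixes the current
    input x_t with the previous hidden state h_prev."""
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)
    y_t = W_hy @ h_t + b_y
    return h_t, y_t

# Unroll over a short toy input sequence
xs = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
h = np.zeros(hidden_size)   # initial hidden state
for t, x in enumerate(xs):
    h, y = rnn_step(x, h)
    print(f"t={t}: h={h.round(3)}, y={y.round(3)}")
```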

Second, I am confused by the RNN backpropagation procedure. I have searched Google, but every resource only lists the generic steps (the gradient calculations); none walks through backpropagation step by step on an example. I am desperate for a worked example of the entire RNN training process (two iterations would be sufficient), even on a single layer with three to four memory cells.
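For reference, below is a hedged sketch of backpropagation through time (BPTT) for the same tiny vanilla RNN (one layer, three hidden units), run for two gradient-descent iterations; the toy sequence, targets, loss (sum of per-step squared errors), and learning rate are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size, output_size = 2, 3, 1   # 3 hidden units ("memory cells")

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
W_hy = rng.normal(scale=0.1, size=(output_size, hidden_size))
b_h = np.zeros(hidden_size)
b_y = np.zeros(output_size)

# Toy training sequence and targets (illustrative only)
xs = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
targets = [np.array([0.2]), np.array([0.4]), np.array([0.6])]
lr = 0.1

for it in range(2):                        # two full training iterations
    # ---- forward pass, storing activations for BPTT ----
    hs = [np.zeros(hidden_size)]           # hs[0] is the initial hidden state
    ys, loss = [], 0.0
    for x, tgt in zip(xs, targets):
        h = np.tanh(W_xh @ x + W_hh @ hs[-1] + b_h)
        y = W_hy @ h + b_y
        hs.append(h)
        ys.append(y)
        loss += 0.5 * np.sum((y - tgt) ** 2)

    # ---- backward pass (BPTT): walk the time steps in reverse ----
    dW_xh = np.zeros_like(W_xh); dW_hh = np.zeros_like(W_hh)
    dW_hy = np.zeros_like(W_hy); db_h = np.zeros_like(b_h); db_y = np.zeros_like(b_y)
    dh_next = np.zeros(hidden_size)        # gradient arriving from the future step
    for t in reversed(range(len(xs))):
        dy = ys[t] - targets[t]                    # dL/dy at this step
        dW_hy += np.outer(dy, hs[t + 1])
        db_y += dy
        dh = W_hy.T @ dy + dh_next                 # from the output and from step t+1
        dpre = (1.0 - hs[t + 1] ** 2) * dh         # back through tanh
        db_h += dpre
        dW_xh += np.outer(dpre, xs[t])
        dW_hh += np.outer(dpre, hs[t])             # hs[t] is the previous hidden state
        dh_next = W_hh.T @ dpre                    # pass gradient to step t-1

    # ---- vanilla gradient-descent update ----
    for param, grad in ((W_xh, dW_xh), (W_hh, dW_hh), (W_hy, dW_hy),
                        (b_h, db_h), (b_y, db_y)):
        param -= lr * grad
    print(f"iteration {it}: loss = {loss:.4f}")
```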

Can anyone provide a concrete example of the RNN model training process?
