Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token.
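Below is a minimal sketch (not from any particular paper) of the training objective described above: teacher-forced maximum likelihood, where each token is predicted from the recurrent state and the ground-truth previous token using a cross-entropy loss. It uses PyTorch, and all names and sizes (vocab_size, hidden_size, the toy batch, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative model sizes (assumptions, not taken from the text above).
vocab_size, embed_size, hidden_size = 1000, 64, 128

embed = nn.Embedding(vocab_size, embed_size)
rnn = nn.GRU(embed_size, hidden_size, batch_first=True)
proj = nn.Linear(hidden_size, vocab_size)
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(
    list(embed.parameters()) + list(rnn.parameters()) + list(proj.parameters()),
    lr=1e-3,
)

# Toy batch of token sequences: position t is predicted from positions < t.
tokens = torch.randint(0, vocab_size, (8, 20))    # (batch, seq_len)
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # previous token -> next token

hidden, _ = rnn(embed(inputs))                    # recurrent state at each step
logits = proj(hidden)                             # (batch, seq_len - 1, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))

# Maximizing the per-token likelihood = minimizing this cross-entropy loss.
optim.zero_grad()
loss.backward()
optim.step()
```

Note that at training time the ground-truth previous token is fed in at every step (teacher forcing), whereas at generation time the model must condition on its own previous predictions.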
To get a sense of the nature of recurrent neural networks, I recommend a rather old paper from 1990 on the simple recurrent network. The following website in Colorado provides a link to it: