As far as I can tell, neural networks have a fixed number of neurons in the input layer.

If neural networks are used in a context like NLP, sentences or blocks of text of varying sizes are fed to a network. How is the varying input size reconciled with the fixed size of the input layer of the network? In other words, how is such a network made flexible enough to deal with an input that might be anywhere from one word to multiple pages of text?

If my assumption of a fixed number of input neurons is wrong, and input neurons are instead added to or removed from the network to match the input size, I don't see how these neurons could ever be trained.

I give the example of NLP, but lots of problems have an inherently unpredictable input size. I'm interested in the general approach for dealing with this.

For images, it's clear you can up- or downsample to a fixed size, but for text this seems to be an impossible approach, since adding or removing text changes the meaning of the original input.
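
To be concrete about the image case, this is the kind of fixed-size preprocessing I mean (a minimal sketch using PIL; the file name and the 224x224 target size are just placeholders, not anything specific to my problem):

```python
# Sketch of the image case: resample any image to a fixed input size
# before feeding it to the network. "photo.jpg" and 224x224 are
# hypothetical choices for illustration only.
from PIL import Image

img = Image.open("photo.jpg")       # arbitrary original dimensions
fixed = img.resize((224, 224))      # now a fixed-size input for the network
```

I don't see an analogous operation for text that preserves the meaning of the input.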
