This question is motivated by the thesis "Improving Neural Networks with Dropout".
On page 2 of the thesis, the author writes: "Dropout can also be interpreted as a way of regularizing a neural network by adding noise to its hidden units. This idea has previously been used in the context of Denoising Autoencoders where noise is added to the inputs of an autoencoder and the target is kept noise-free."
Training a denoising autoencoder and training an autoencoder with dropout applied to the input layer look the same, but at test time they differ: with dropout we multiply the outgoing weights by (1 - dropout ratio), whereas with the denoising autoencoder we use the weights unchanged. Isn't that contradictory? How can we justify this? Please let me know if there is anything wrong in my argument.
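To make the comparison concrete, here is a rough sketch of how I understand the two test-time rules (a single linear encoder layer in NumPy; the names `drop_ratio`, `W`, `b` are just illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

drop_ratio = 0.2                  # probability of zeroing an input unit
W = rng.normal(size=(100, 50))    # input -> hidden weights
b = np.zeros(50)

def train_step_corrupt(x):
    # Both methods corrupt the input the same way during training:
    # each input unit is zeroed independently with probability drop_ratio.
    mask = rng.random(x.shape) > drop_ratio
    return (x * mask) @ W + b

def test_dropout(x):
    # Dropout rule: feed the clean input but scale the weights by the
    # keep probability (1 - drop_ratio), so the test-time pre-activation
    # matches its expected value under the training-time corruption.
    return x @ (W * (1 - drop_ratio)) + b

def test_denoising_ae(x):
    # Denoising-autoencoder rule: feed the clean input with the weights
    # unchanged -- no rescaling at test time.
    return x @ W + b

x = rng.normal(size=(100,))
print(test_dropout(x)[:3])
print(test_denoising_ae(x)[:3])
```

With this setup, the expected pre-activation under the training-time corruption is (1 - drop_ratio) * (x @ W), which the dropout rule reproduces at test time while the denoising-autoencoder rule does not; that mismatch is exactly what puzzles me.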