This is, admittedly, a somewhat fuzzy question about neural networks. I have recently been discussing whether the ‘loss landscape’ (sometimes referred to as the ‘loss surface’) is altered when dropout is used. The researchers I polled seem divided on the matter, and I am curious to hear your thoughts.
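To make the question concrete, here is a minimal numpy sketch (a toy single-unit regression with made-up data; the variable names `x`, `y`, `w`, and the helper `loss_with_dropout` are all hypothetical) illustrating one way to frame it: with dropout active, the loss at a fixed weight vector depends on the sampled mask, so each mask induces its own landscape, while the mask-averaged (expected) loss is again a single deterministic function of the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=5)   # one input example (hypothetical data)
y = 1.0                  # its target
w = rng.normal(size=5)   # a fixed weight vector

def loss_with_dropout(w, mask, p=0.5):
    # Inverted dropout: drop inputs where mask is False,
    # rescale survivors by 1/(1-p), then squared error.
    h = (x * mask) / (1.0 - p)
    return 0.5 * (h @ w - y) ** 2

p = 0.5
# Several forward passes at the SAME w give different losses:
losses = [loss_with_dropout(w, rng.random(5) > p) for _ in range(4)]
print(losses)

# Averaging over many masks approximates the expected loss,
# which is a deterministic function of w:
expected = np.mean([loss_with_dropout(w, rng.random(5) > p)
                    for _ in range(10_000)])
print(expected)
```

Whether one says dropout "alters the landscape" may then come down to which object one means: the per-mask stochastic loss, or the expected loss marginalized over masks.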