Hi everybody,

I have a question and am looking for your opinions; any thoughts are welcome, and I appreciate your time in advance.

In general, generative adversarial networks (GANs) can generate highly realistic images starting from a noise vector. Sometimes our aim is to produce new samples by learning the underlying data distribution, and GANs learn such a distribution in an indirect fashion. However, most of the time we want to condition the GAN on something: in conditional GANs, we start from a label, an image, etc. For the sake of simplicity, consider the super-resolution task, where the GAN starts from a low-resolution image and, through learning, produces a high-resolution image.

My questions are:

1) How does the GAN add extra information to the low-resolution image? In other words, how is it possible to go from less information to more information? Does it simply learn from the training data, and if so, how can it generalize to unseen data?

2) Through this process, can the nature of the data change? For example, if our low-resolution cat image has two small eyes, is it possible for the generated high-resolution image to have three eyes? More generally, how can I design systematic experiments to show under which conditions GANs fall into hallucination?

Feel free to share your thoughts and opinions. Thanks.
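To make question 2 concrete, here is a minimal sketch (in PyTorch) of one kind of consistency probe I have in mind: project the super-resolved output back onto the low-resolution grid and measure how far it drifts from the observed input, since content that cannot be explained by the input is a candidate for hallucination. The `generator` below is just bicubic upsampling standing in for a trained conditional GAN generator, so the script runs end to end; it is an illustrative assumption, not a real model.

```python
import torch
import torch.nn.functional as F

def downsample(x, scale):
    # Project an image back onto the low-resolution grid (bicubic here).
    return F.interpolate(x, scale_factor=1.0 / scale,
                         mode="bicubic", align_corners=False)

def consistency_error(lr, sr, scale):
    # L1 distance between the observed low-res input and the super-resolved
    # output after projecting it back down. Large values flag content the
    # generator invented rather than inferred from the input.
    return F.l1_loss(downsample(sr, scale), lr).item()

if __name__ == "__main__":
    scale = 4
    lr = torch.rand(1, 3, 32, 32)  # stand-in low-resolution batch

    # Placeholder for a trained conditional GAN generator (assumption):
    # plain bicubic upsampling, so the script runs without any weights.
    generator = lambda x: F.interpolate(x, scale_factor=scale,
                                        mode="bicubic", align_corners=False)

    sr = generator(lr)
    print(f"cycle-consistency L1: {consistency_error(lr, sr, scale):.4f}")
```

Sweeping this metric over degradation levels (blur, noise, downsampling factor) and over in-distribution versus out-of-distribution inputs would be one way to map out where hallucination starts.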
