Generative Adversarial Networks (GANs) have achieved strong performance in image style transfer, e.g., translating a landscape photograph into a painting style (Monet, Van Gogh) and vice versa [1].

Should we expect the same good performance when using GANs to transfer a textureless 3D model rendering to a real-life photo style? If so, what is the general implementation procedure, and do I need to train the GAN from scratch with a large number of images from both domains (3D model renderings and real-life photos)?
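To make the question concrete, below is a minimal PyTorch sketch of the CycleGAN-style training objective I have in mind for the render-to-photo setting. The toy `make_generator`/`make_discriminator` networks, the `train_step` helper, and the hyperparameters are placeholders of my own, not the implementation from [1], which uses ResNet-based generators and PatchGAN discriminators:

```python
import itertools
import torch
import torch.nn as nn

# Placeholder networks; in practice these would be the ResNet generators
# and PatchGAN discriminators from the CycleGAN paper [1].
def make_generator():
    return nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())

def make_discriminator():
    return nn.Sequential(nn.Conv2d(3, 1, 4, stride=2, padding=1))

G = make_generator()             # 3D render -> photo
F = make_generator()             # photo -> 3D render
D_photo = make_discriminator()   # real photo vs. G(render)
D_render = make_discriminator()  # real render vs. F(photo)

gan_loss = nn.MSELoss()          # least-squares GAN loss, as in [1]
cycle_loss = nn.L1Loss()         # cycle-consistency loss
lambda_cyc = 10.0                # cycle-loss weight used in [1]

opt_G = torch.optim.Adam(itertools.chain(G.parameters(), F.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(itertools.chain(D_photo.parameters(),
                                         D_render.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))

def train_step(render, photo):
    """One update on an unpaired (render, photo) batch."""
    # Generators: fool both discriminators and enforce cycle consistency.
    fake_photo = G(render)
    fake_render = F(photo)
    pred_fake_photo = D_photo(fake_photo)
    pred_fake_render = D_render(fake_render)
    loss_G = (gan_loss(pred_fake_photo, torch.ones_like(pred_fake_photo))
              + gan_loss(pred_fake_render, torch.ones_like(pred_fake_render))
              + lambda_cyc * cycle_loss(F(fake_photo), render)
              + lambda_cyc * cycle_loss(G(fake_render), photo))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Discriminators: separate real images from generated ones.
    pred_real_p = D_photo(photo)
    pred_fake_p = D_photo(fake_photo.detach())
    pred_real_r = D_render(render)
    pred_fake_r = D_render(fake_render.detach())
    loss_D = (gan_loss(pred_real_p, torch.ones_like(pred_real_p))
              + gan_loss(pred_fake_p, torch.zeros_like(pred_fake_p))
              + gan_loss(pred_real_r, torch.ones_like(pred_real_r))
              + gan_loss(pred_fake_r, torch.zeros_like(pred_fake_r)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
```

Is this the right overall structure for the render-to-photo case, or is fine-tuning an existing implementation preferable to training from scratch?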

Reference

[1] Zhu, Jun-Yan, et al. "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks." Proceedings of the IEEE International Conference on Computer Vision, 2017.
