Conference Paper

Synthetic to Real Human Avatar Translation via One Shot Pretrained GAN Inversion

By
Tamam M.
Al-Atabany W.

This paper tackles the problem of generating photorealistic images from synthetically rendered human avatar faces produced by computer graphics engines. Our approach leverages the capabilities of generative models such as StyleGAN, which can generate high-quality human faces that are hard to distinguish from images of real people. We present a framework that effectively bridges the gap between the synthetic and real domains through one-shot GAN inversion, which maps the synthetic image into the real latent space of StyleGAN. Benchmarks and quantitative results show that our method achieves significant improvements in preserving high-fidelity facial features and identity. The proposed system sets a new standard, paving the way for more sophisticated and practical high-fidelity photorealistic rendering. © 2024 IEEE.
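
The abstract describes mapping a synthetic face into the latent space of a pretrained StyleGAN via GAN inversion. The paper itself does not provide code here; the following is only a minimal sketch of the general optimization-based inversion idea, assuming a StyleGAN-like generator G that maps a latent code w to an image. The StubGenerator and the function names below are hypothetical stand-ins so the loop runs end to end; they are not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StubGenerator(nn.Module):
    """Hypothetical placeholder for a pretrained StyleGAN generator (latent -> image)."""
    def __init__(self, latent_dim=512, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Linear(latent_dim, 3 * img_size * img_size)

    def forward(self, w):
        x = torch.tanh(self.fc(w))
        return x.view(-1, 3, self.img_size, self.img_size)

def invert(generator, target, latent_dim=512, steps=500, lr=0.01):
    """Optimize a latent code w so that generator(w) reconstructs `target`."""
    generator.eval()
    for p in generator.parameters():
        p.requires_grad_(False)          # generator weights stay frozen; only w is optimized

    w = torch.zeros(1, latent_dim, requires_grad=True)   # starting latent (here simply zeros)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        recon = generator(w)
        loss = F.mse_loss(recon, target)  # pixel loss; inversion methods typically add a perceptual term
        loss.backward()
        opt.step()
    return w.detach()

if __name__ == "__main__":
    G = StubGenerator()
    synthetic_face = torch.rand(1, 3, 64, 64) * 2 - 1    # stand-in for a rendered avatar image in [-1, 1]
    w_star = invert(G, synthetic_face)
    photoreal = G(w_star)                # image re-synthesized from the recovered latent code
    print(photoreal.shape)               # torch.Size([1, 3, 64, 64])

In this sketch the recovered latent w_star is decoded by the frozen generator, so the output inherits the realism of the generator's training distribution; this is the general mechanism by which inversion can translate a synthetic input toward the real-face domain.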