
Official Implementation of our pSp paper for both training and evaluation. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks.


Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation

We present a generic image-to-image translation framework, pixel2style2pixel (pSp). Our pSp framework is based on a novel encoder network that directly generates a series of style vectors which are fed into a pretrained StyleGAN generator, forming the extended W+ latent space. We first show that our encoder can directly embed real images into W+, with no additional optimization. Next, we propose utilizing our encoder to directly solve image-to-image translation tasks, defining them as encoding problems from some input domain into the latent domain. By deviating from the standard "invert first, edit later" methodology used with previous StyleGAN encoders, our approach can handle a variety of tasks even when the input image is not represented in the StyleGAN domain. We show that solving translation tasks through StyleGAN significantly simplifies the training process, as no adversary is required, has better support for solving tasks without pixel-to-pixel correspondence, and inherently supports multi-modal synthesis via the resampling of styles. Finally, we demonstrate the potential of our framework on a variety of facial image-to-image translation tasks, even when compared to state-of-the-art solutions designed specifically for a single task, and further show that it can be extended beyond the human facial domain.
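
The multi-modal synthesis mentioned above works by resampling some of the per-layer style vectors in W+. As a minimal NumPy sketch (the shapes, layer split, and function name here are illustrative assumptions, not the repository's API): StyleGAN at 1024x1024 resolution uses 18 style vectors of dimension 512, and swapping the finer layers for freshly sampled latents yields different plausible outputs for the same input.

```python
import numpy as np

# Illustrative W+ shape (assumption): 18 style vectors of dim 512,
# one per layer of a 1024x1024 StyleGAN generator.
N_STYLES, STYLE_DIM = 18, 512

rng = np.random.default_rng(0)

# Stand-in for an encoder's output: one style vector per generator layer.
w_plus = rng.standard_normal((N_STYLES, STYLE_DIM))

def resample_styles(w_plus, n_coarse=7, alpha=1.0, rng=rng):
    """Keep the coarse styles (structure/pose) from the input's codes and
    blend the finer styles (texture/color) toward newly sampled latents.
    `n_coarse` and `alpha` are hypothetical knobs for this sketch."""
    sampled = rng.standard_normal(w_plus.shape)
    mixed = w_plus.copy()
    mixed[n_coarse:] = (1 - alpha) * w_plus[n_coarse:] + alpha * sampled[n_coarse:]
    return mixed

mixed = resample_styles(w_plus)
```

Each call with a fresh random state produces a new `mixed` code; feeding each one to the pretrained generator would give a distinct output conditioned on the same input.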
