Fig. 6 | Insights into Imaging

From: Enhancing cancer differentiation with synthetic MRI examinations via generative models: a systematic review

Generative architectures for image-to-image translation. a pix2pix extends the conditional GAN architecture, providing control over the generated image. A U-Net generator translates images from one domain to the other, with skip connections sharing low-level features between the encoder and decoder. The discriminator judges whether a patch of an image is real or synthetic rather than judging the whole image, while the modified loss function encourages the generated image to be plausible in the context of the target domain. b CycleGAN is designed specifically for image-to-image translation on unpaired sets of images. The architecture uses two generators and two discriminators. The generators are often variations of autoencoders that take an image as input and output an image; each discriminator takes an image as input and outputs a single value. In CycleGAN, each generator receives additional feedback from the other generator. This feedback confirms whether a generated image is cycle consistent, meaning that applying both generators in succession to an image should reproduce a similar image. c In the MUNIT architecture, the image representation is decomposed into a content code and a style code through the respective encoders. A content code and a style code are recombined to translate an image to the target domain. By sampling different style codes, the model can produce diverse, multimodal outputs.
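The cycle-consistency constraint described for panel b can be sketched numerically. This is a minimal illustration, not the CycleGAN implementation: the "generators" here are toy invertible array functions standing in for trained networks, and the L1 reconstruction penalty shown is the standard form of the cycle-consistency loss.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency loss for a generator pair.

    G maps domain X -> Y and F maps Y -> X. Applying both generators
    in succession should reproduce the original image, so we penalize
    the mean absolute difference between each image and its cycled
    reconstruction.
    """
    loss_x = np.mean(np.abs(F(G(x)) - x))  # x -> G(x) -> F(G(x)) ~ x
    loss_y = np.mean(np.abs(G(F(y)) - y))  # y -> F(y) -> G(F(y)) ~ y
    return loss_x + loss_y

# Toy stand-ins for the two generators: G brightens, F darkens.
# They are exact inverses, so the cycle loss should be near zero.
G = lambda img: img + 0.2
F = lambda img: img - 0.2

rng = np.random.default_rng(0)
x = rng.random((8, 8))  # toy "image" from domain X
y = rng.random((8, 8))  # toy "image" from domain Y
loss = cycle_consistency_loss(x, y, G, F)
```

During CycleGAN training this term is added to the adversarial losses of both discriminators, which is what lets the model learn from unpaired image sets.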
