contrastive-unpaired-translation and DCLGAN
About contrastive-unpaired-translation
taesungp/contrastive-unpaired-translation
Contrastive unpaired image-to-image translation; faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)
This project helps artists, designers, and marketers transfer the style of one set of images onto another without needing perfectly matched example pairs. For instance, you could take a collection of photos of Russian Blue cats and generate new images of them as 'Grumpy Cats'. You supply two distinct collections of images, and it outputs new images that combine the content of one with the style of the other.
About DCLGAN
JunlinHan/DCLGAN
Code for "Dual Contrastive Learning for Unsupervised Image-to-Image Translation" (NTIRE workshop, CVPRW 2021, oral).
This tool helps researchers and visual effects artists translate images from one style to another without needing matching image pairs. For instance, given a collection of photos of horses and a collection of photos of zebras, it can generate realistic 'zebra' versions of your horses and 'horse' versions of your zebras. It's ideal for anyone working with visual data who needs to bridge the gap between two distinct image domains.
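Both repositories build on the CycleGAN family of codebases, which expect unpaired data laid out as sibling `trainA`/`trainB` (and `testA`/`testB`) folders under a dataset root. The sketch below, a minimal and hypothetical example (the `horse2zebra` dataset name and paths are illustrative), prepares that folder layout:

```python
from pathlib import Path

# CycleGAN-style repos such as contrastive-unpaired-translation and DCLGAN
# expect unpaired images split by domain: trainA holds the source domain
# (e.g. horses), trainB the target domain (e.g. zebras).
root = Path("datasets/horse2zebra")  # illustrative dataset root
for sub in ("trainA", "trainB", "testA", "testB"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# Training is then pointed at this root via the repo's own CLI,
# for example (flags per each repo's README, not verified here):
#   python train.py --dataroot ./datasets/horse2zebra --name horse2zebra_CUT
print(sorted(p.name for p in root.iterdir()))
```

No pairing between individual images in `trainA` and `trainB` is required; the contrastive objectives in both projects learn the cross-domain mapping from the two unpaired collections.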