
1 Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Presenter: Hajar Emami

2 Agenda
Background: image-to-image translation learns the mapping between an input image and an output image from a training set of aligned image pairs
Motivation: for many tasks, paired data is not available
Model
Experimental results
Discussion

3 Image-to-image translation
Converting an image from one representation, x, to another, y: grayscale to color, image to semantic labels, etc. Obtaining paired data can be difficult and expensive.

4 Unpaired image-to-image translation
Goal: learn a mapping G : X → Y and an inverse mapping F : Y → X, where G and F should be inverses of each other. Combine adversarial losses with a cycle consistency loss that enforces F(G(x)) ≈ x and G(F(y)) ≈ y. Translating from one domain to the other and back again should arrive back at the start. Forward cycle-consistency loss: x → G(x) → F(G(x)) ≈ x; backward cycle-consistency loss: y → F(y) → G(F(y)) ≈ y.
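The round-trip property described above can be illustrated with a minimal sketch. The lambdas here are toy invertible stand-ins for the learned networks G and F, not the actual model:

```python
import numpy as np

# Toy stand-ins for the learned mappings: F is the exact inverse of G.
G = lambda x: 2.0 * x + 1.0      # X -> Y
F = lambda y: (y - 1.0) / 2.0    # Y -> X

x = np.linspace(-1.0, 1.0, 5)
round_trip = F(G(x))             # forward cycle: x -> G(x) -> F(G(x))
print(np.allclose(round_trip, x))  # → True: we arrive back at the start
```

In the real model neither mapping is exactly invertible; the cycle-consistency loss penalizes the L1 distance between the round trip and the starting point instead of requiring equality.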

5 Formulation
Adversarial losses: match the distribution of generated images to the target domain. Cycle consistency losses: prevent the learned mappings G and F from contradicting each other.
Adversarial loss: L_GAN(G, D_Y, X, Y) = E_y[log D_Y(y)] + E_x[log(1 − D_Y(G(x)))]
Cycle consistency loss: L_cyc(G, F) = E_x[‖F(G(x)) − x‖₁] + E_y[‖G(F(y)) − y‖₁]
Full objective: L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ L_cyc(G, F)
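The losses named on this slide can be sketched in numpy. This is a simplified illustration, not the training code: d_real and d_fake stand for discriminator outputs in (0, 1), and G and F are passed in as plain callables:

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-12):
    """L_GAN = E[log D(real)] + E[log(1 - D(fake))]; eps avoids log(0)."""
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

def cycle_loss(x, y, G, F):
    """L_cyc(G, F) = E_x[||F(G(x)) - x||_1] + E_y[||G(F(y)) - y||_1]."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

def full_objective(d_real_y, d_fake_y, d_real_x, d_fake_x, x, y, G, F, lam=10.0):
    """L = L_GAN(G, D_Y) + L_GAN(F, D_X) + lam * L_cyc(G, F)."""
    return (adversarial_loss(d_real_y, d_fake_y)
            + adversarial_loss(d_real_x, d_fake_x)
            + lam * cycle_loss(x, y, G, F))
```

With a perfectly confident discriminator (outputs 1 on real, 0 on fake) the adversarial term is at its maximum of 0, and with mappings that are exact inverses the cycle term vanishes.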

6 Implementation
Network architecture: two stride-2 convolutions, several residual blocks, and two fractionally strided convolutions.
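The resolution changes through the architecture just listed can be traced with a short plain-Python sketch. The choice of a 256×256 input and 9 residual blocks is an assumption for illustration; the slide only says "several":

```python
def generator_shapes(h=256, w=256, n_res=9):
    """Trace feature-map resolution through the generator: two stride-2
    convolutions halve the resolution twice, the residual blocks keep it
    fixed, and two fractionally strided convolutions double it twice,
    returning to the input size."""
    trace = [("input", h, w)]
    for i in (1, 2):                  # stride-2 downsampling convolutions
        h, w = h // 2, w // 2
        trace.append((f"conv_stride2_{i}", h, w))
    trace.append((f"residual_x{n_res}", h, w))  # resolution unchanged
    for i in (1, 2):                  # fractionally strided (upsampling) convs
        h, w = h * 2, w * 2
        trace.append((f"upconv_{i}", h, w))
    return trace

print(generator_shapes())  # 256 → 128 → 64 → (residual) → 128 → 256
```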

7 Analysis of the loss function
Removing either the GAN loss or the cycle-consistency loss degrades results; both terms are critical.

8 Analysis of the loss function
Different variants of the method for mapping labels↔photos. Cycle alone and GAN + backward both fail to produce images similar to the target domain. GAN alone and GAN + forward suffer from mode collapse, producing identical label maps regardless of the input photo.

9 Image reconstruction quality
The reconstructed images are very close to the original inputs x.

10 Additional results on paired datasets
Example results on paired datasets used in "pix2pix". The quality is close to the fully supervised pix2pix, while the method learns the mapping without paired supervision.

11 Applications

12 Discussion
Tasks involving color and texture changes often succeed. Tasks requiring geometric changes show little success. In many cases, completely unpaired data is plentifully available.

13 GANs for Biological Image Synthesis
Presenter: Hajar Emami

14 Introduction
A novel application of GANs: synthesis of cells imaged by fluorescence microscopy. The correlation between the spatial patterns of different fluorescent proteins reflects important biological functions, and synthesized images have to capture these relationships to be useful for biological applications. Causal dependencies between image channels make it possible to generate multichannel images that would be experimentally impossible to acquire.

15 Model
GAN: a minimax game between two models. The generator aims to output images similar to the training set given random noise; the discriminator aims to distinguish the output of the generator from the training set.
LIN dataset: 170,000 fluorescence microscopy images of cells. Each image corresponds to one cell and is composed of two independent fluorescence channels (red and green). Bgs4 labels the red channel; a subset of 6 factors among 41 different proteins labels the green channel (26,909 images of cells).

16 Model
Use the common information contained in the red channel to generate a cell with several of the green-labeled proteins together. Modify the standard DCGAN by substituting the interdependence of the channels with a causal dependence of the green channel on the red. Observe multiple modes of the green signal for a single red signal. Train with the Wasserstein GAN (WGAN-GP) objective.
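The WGAN-GP critic objective mentioned above can be sketched in numpy. This is a simplified illustration: the critic here is a hypothetical linear function whose gradient is supplied analytically (there is no autograd in this sketch), not the paper's convolutional critic:

```python
import numpy as np

def wgan_gp_critic_loss(critic, grad_critic, real, fake, lam=10.0, seed=0):
    """WGAN-GP critic objective: E[D(fake)] - E[D(real)]
    plus lam * E[(||grad D(x_hat)||_2 - 1)^2], where x_hat is a
    random interpolate between real and fake samples."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    grads = grad_critic(x_hat)                         # shape (batch, dim)
    penalty = np.mean((np.linalg.norm(grads, axis=1) - 1.0) ** 2)
    return np.mean(critic(fake)) - np.mean(critic(real)) + lam * penalty

# Hypothetical linear critic D(x) = w.x: its gradient is the constant w,
# and ||w|| = 1, so the gradient penalty vanishes here.
w = np.array([0.6, 0.8])
critic = lambda x: x @ w
grad_critic = lambda x: np.tile(w, (x.shape[0], 1))
real = np.ones((4, 2))
fake = np.zeros((4, 2))
print(wgan_gp_critic_loss(critic, grad_critic, real, fake))  # ≈ -1.4
```

The gradient penalty softly enforces the 1-Lipschitz constraint that the Wasserstein formulation requires, replacing the weight clipping of the original WGAN.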

17 Architecture
DCGAN generator (left) and the proposed separable generator (right): the filters and features of the upconvolutional layers are separated across channels.
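The separation can be illustrated with a toy fully connected analogue (the actual model separates convolutional filters; the weight matrices W_rr, W_gr, W_gg here are hypothetical). Red features are computed from red inputs only, while green features see both, giving the causal red → green dependence:

```python
import numpy as np

def separable_step(z_red, z_green, W_rr, W_gr, W_gg):
    """One layer of a separable generator, toy version: red features
    depend only on red inputs; green features depend on both red and
    green inputs. There is no W_rg path, so green never influences red."""
    red = z_red @ W_rr
    green = z_red @ W_gr + z_green @ W_gg
    return red, green

# Perturbing the green input changes green features but leaves red intact.
I = np.eye(2)
z_red, z_green = np.ones((1, 2)), np.zeros((1, 2))
r1, g1 = separable_step(z_red, z_green, I, I, I)
r2, g2 = separable_step(z_red, z_green + 5.0, I, I, I)
print(np.allclose(r1, r2), np.allclose(g1, g2))  # → True False
```

This one-way dependence is what lets a single red channel be paired with multiple sampled green channels, mimicking images that could not be acquired experimentally.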

18 Results
Lower C2ST scores correspond to better-looking images: sharper, with fewer artifacts.

19 Results
Results of C2ST with WGAN-GP when comparing real images of different proteins.

