Presentation transcript:

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Presenter: Hajar Emami

Agenda: Background, Motivation, Model, Experimental results, Discussion

Background. Image-to-image translation: learn the mapping between an input image and an output image from a training set of aligned image pairs.
Motivation. For many tasks, paired data is not available.

Image-to-image translation: converting an image from one representation, x, to another, y, e.g., grayscale to color, image to semantic labels, ... Obtaining paired data can be difficult and expensive.

Unpaired image-to-image translation. Goal: learn a mapping G : X → Y and an inverse mapping F : Y → X; G and F should be inverses of each other. Combine a cycle-consistency loss, which enforces F(G(x)) ≈ x and G(F(y)) ≈ y, with adversarial losses. Translating from one domain to the other and back again should arrive at the starting point. Forward cycle-consistency: x → G(x) → F(G(x)) ≈ x; backward cycle-consistency: y → F(y) → G(F(y)) ≈ y.

Formulation. Adversarial losses match the distribution of generated images to the target domain; cycle-consistency losses prevent the learned mappings G and F from contradicting each other.

Adversarial loss (for G and its discriminator D_Y; an analogous term is used for F and D_X):
L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 − D_Y(G(x)))]

Cycle-consistency loss:
L_cyc(G, F) = E_{x~p_data(x)}[‖F(G(x)) − x‖₁] + E_{y~p_data(y)}[‖G(F(y)) − y‖₁]

Full objective (λ weights the relative importance of the two terms):
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ L_cyc(G, F)
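To make the objective concrete, here is a minimal PyTorch sketch of the generator-side objective. The function name and wiring are illustrative, not the authors' code; in practice the paper replaces the log-likelihood adversarial loss with a least-squares loss, which is the form used here.

import torch

def cycle_gan_losses(G, F, D_X, D_Y, real_x, real_y, lam=10.0):
    # Translate in both directions
    fake_y = G(real_x)  # G : X -> Y
    fake_x = F(real_y)  # F : Y -> X
    # Adversarial terms (least-squares form): push generated images
    # toward the target domain as judged by the discriminators
    loss_gan = ((D_Y(fake_y) - 1) ** 2).mean() + ((D_X(fake_x) - 1) ** 2).mean()
    # Cycle-consistency terms (L1): translating there and back
    # should recover the input
    loss_cyc = (F(fake_y) - real_x).abs().mean() + (G(fake_x) - real_y).abs().mean()
    # Full generator objective; the discriminators are trained separately
    return loss_gan + lam * loss_cyc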

Implementation. Network architecture: two stride-2 convolutions, several residual blocks, and two fractionally strided (stride 1/2) convolutions.
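A sketch of this generator in PyTorch, assuming 3-channel 256×256 images and the paper's 9-residual-block setting; instance normalization and reflection padding follow the paper's appendix.

import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)  # residual connection

def make_generator(n_blocks=9):
    layers = [nn.ReflectionPad2d(3), nn.Conv2d(3, 64, 7),
              nn.InstanceNorm2d(64), nn.ReLU(True)]
    # two stride-2 convolutions (downsampling: 64 -> 128 -> 256 channels)
    for ch in (64, 128):
        layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(ch * 2), nn.ReLU(True)]
    # several residual blocks at the bottleneck resolution
    layers += [ResBlock(256) for _ in range(n_blocks)]
    # two fractionally strided (transposed) convolutions (upsampling)
    for ch in (256, 128):
        layers += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2,
                                      padding=1, output_padding=1),
                   nn.InstanceNorm2d(ch // 2), nn.ReLU(True)]
    layers += [nn.ReflectionPad2d(3), nn.Conv2d(64, 3, 7), nn.Tanh()]
    return nn.Sequential(*layers)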

Analysis of the loss function. Removing either the GAN loss or the cycle-consistency loss degrades results: both terms are critical.

Analysis of the loss function. Different variants of the method for mapping labels↔photos: both Cycle alone and GAN + backward fail to produce images similar to the target domain; GAN alone and GAN + forward suffer from mode collapse, producing identical label maps regardless of the input photo.

Image reconstruction quality: the reconstructed images F(G(x)) are very close to the original inputs x.

Additional results on paired datasets: example results on paired datasets used in "pix2pix". The quality is close to that of the fully supervised pix2pix, while the method learns the mapping without paired supervision.

Applications

Discussion. Tasks that involve color and texture changes often succeed; tasks that require geometric changes show little success. In many cases, completely unpaired data is plentifully available.

GANs for Biological Image Synthesis
Presenter: Hajar Emami

Introduction. A novel application of GANs: synthesis of cells imaged by fluorescence microscopy. The correlation between the spatial patterns of different fluorescent proteins reflects important biological functions, so synthesized images have to capture these relationships to be useful for biological applications. Modeling causal dependencies between image channels makes it possible to generate multi-channel images that would be experimentally impossible to obtain.

Model. GAN: a minimax game between two models. The generator aims to output images similar to the training set given random noise; the discriminator aims to distinguish the output of the generator from the training set.
LIN dataset: 170,000 fluorescence microscopy images of cells. Each image corresponds to one cell and is composed of two independent fluorescence channels (red and green). Red channel: Bgs4. Green channel: a subset of 6 factors among 41 different proteins, giving 26,909 cell images.

Model. Use the common information contained in the red channel to generate a cell with several of the green-labeled proteins together. Modify the standard DCGAN by substituting the interdependence of the channels with a causal dependence of the green channel on the red, so that multiple modes of the green signal can be observed for a single red. Trained with the Wasserstein GAN with gradient penalty (WGAN-GP) objective; see the sketch below.
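A minimal PyTorch sketch of the WGAN-GP critic objective (illustrative names, not the authors' code; lam=10 is the standard gradient-penalty weight from the WGAN-GP paper):

import torch

def wgan_gp_critic_loss(D, real, fake, lam=10.0):
    # Wasserstein critic loss: the critic learns to assign higher
    # scores to real images than to generated ones
    loss = D(fake).mean() - D(real).mean()
    # Gradient penalty on random interpolates between real and fake
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return loss + lam * gp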

Architecture (figure): DCGAN generator (left) vs. the proposed separable generator (right), which separates the filters of the upconvolutional layers and their features into per-channel groups.
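A hypothetical sketch of one upconvolutional block of such a separable generator (the wiring and names are illustrative assumptions, not the authors' code): the filters are split into a red group and a green group, and only the green stream is allowed to read the red stream's features, encoding the causal dependence of green on red.

import torch
import torch.nn as nn

class SeparableUpBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # separate filter groups for the red and green feature streams
        self.up_red = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        self.up_green = nn.ConvTranspose2d(2 * in_ch, out_ch, 4, stride=2, padding=1)

    def forward(self, red_feat, green_feat):
        new_red = torch.relu(self.up_red(red_feat))
        # the green stream sees the red features, but not vice versa
        new_green = torch.relu(
            self.up_green(torch.cat([green_feat, red_feat], dim=1)))
        return new_red, new_green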

Results. Lower C2ST (classifier two-sample test) scores correspond to better-looking images: sharper, with fewer artifacts.

Results. C2ST scores with WGAN-GP when comparing real images of different proteins.