Distribution Matching Losses Can Hallucinate Features in Medical Image Translation

@article{Cohen2018DistributionML,
  title={Distribution Matching Losses Can Hallucinate Features in Medical Image Translation},
  author={Joseph Paul Cohen and Margaux Luck and Sina Honari},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.08841},
  url={https://api.semanticscholar.org/CorpusID:43919703}
}
This paper discusses how distribution matching losses, such as those used in CycleGAN, can hallucinate features when used to synthesize medical images, leading to misdiagnosis of medical conditions. It seems appealing to…

StegoGAN: Leveraging Steganography for Non-Bijective Image-to-Image Translation

StegoGAN is introduced, a novel model that leverages steganography to prevent spurious features in generated images and enhance the semantic consistency of the translated images without requiring additional postprocessing or supervision.

3C-GAN: class-consistent CycleGAN for malaria domain adaptation model

A modified distribution matching loss for CycleGAN is introduced to eliminate feature hallucination on the malaria dataset, and it is believed that this approach will expedite the development of unsupervised unpaired GANs that are safe for clinical use.

A Practical Framework for Unsupervised Structure Preservation Medical Image Enhancement

A novel unsupervised GAN-based method called Laplacian medical image enhancement GAN (LaMEGAN) is proposed, which achieves a satisfactory balance between quality and originality, with robust structure-preservation performance, while generating compelling visual results with very high image quality scores.

Mutually Improved Endoscopic Image Synthesis and Landmark Detection in Unpaired Image-to-Image Translation

A task defined on these sparse landmark labels improves the consistency of synthesis by the generator network in both domains, and it is shown that, through dataset fusion, generated intra-operative images can be leveraged as additional training data for the detection network itself.

Anatomical Conditioning for Contrastive Unpaired Image-to-Image Translation of Optical Coherence Tomography Images

This work improves the segmentation of biomarkers in Home-OCT images in an unsupervised domain adaptation scenario and increases the similarity between the style-translated images and the target distribution.

Projected Distribution Loss for Image Enhancement

It is demonstrated that aggregating 1D-Wasserstein distances between CNN activations is more reliable than the existing approaches, and it can significantly improve the perceptual performance of enhancement models.
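As a minimal sketch of the idea (not the paper's implementation): in one dimension, the Wasserstein-1 distance between two equal-sized empirical samples reduces to the mean absolute difference of their sorted values, and the loss aggregates this distance over CNN feature channels. The function names and the per-channel layout below are illustrative assumptions.

```python
import numpy as np

def wasserstein_1d(a, b):
    # For equal-sized empirical samples, the 1D Wasserstein-1 distance
    # reduces to the mean absolute difference of the sorted values.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def projected_distribution_loss(feats_x, feats_y):
    # Aggregate per-channel 1D distances between CNN activations.
    # feats_x, feats_y: (channels, n) arrays of flattened feature maps.
    return sum(wasserstein_1d(cx, cy) for cx, cy in zip(feats_x, feats_y))
```

Sorting makes each 1D comparison cheap, which is what makes aggregating many such projections practical compared to a full high-dimensional optimal transport solve.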

Towards semi-supervised segmentation via image-to-image translation

This work proposes a semi-supervised framework that employs image-to-image translation between weak labels (e.g., presence vs. absence of cancer) in addition to fully supervised segmentation on some examples, and re-uses the encoder and decoders for translating in either direction between the two domains, employing a strategy of selectively decoding domain-specific variations.

Similarity and quality metrics for MR image-to-image translation

This work quantitatively analyzes 11 similarity (reference) and 12 quality (non-reference) metrics for assessing synthetic images, investigates their sensitivity to 11 kinds of distortions and typical MR artifacts, and derives recommendations for effective usage of the analyzed similarity and quality metrics.

Hallucination Index: An Image Quality Metric for Generative Reconstruction Models

This work proposes a new image quality metric called the hallucination index, which could be useful for evaluation of generative image reconstructions or as a warning label to inform radiologists about the degree of hallucinations in medical images.

Harmonic Unpaired Image-to-image Translation

This paper develops HarmonicGAN to learn bi-directional translations between the source and the target domains, and turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.
...

Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
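The cycle-consistency term can be sketched as follows. This is a toy illustration rather than the authors' code: G and F stand in for the two generator networks, and an L1 reconstruction penalty is assumed.

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    # L1 cycle loss: F(G(x)) should reconstruct x and G(F(y)) should
    # reconstruct y; it is trained alongside the adversarial losses.
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))
```

With perfectly inverse mappings the loss vanishes; in training it is weighted against the two adversarial terms, which alone would not constrain the translation to preserve content.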

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

Unsupervised Image-to-Image Translation Networks

This work makes a shared-latent space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.

Medical Image Synthesis with Context-Aware Generative Adversarial Networks

A fully convolutional network is trained to generate CT given the MR image to better model the nonlinear mapping from MRI to CT and produce more realistic images, and an image-gradient-difference based loss function is proposed to alleviate the blurriness of the generated CT.
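An image-gradient-difference loss of this kind can be sketched with finite differences; the following is a minimal numpy illustration of the general technique, not the paper's exact formulation.

```python
import numpy as np

def gradient_difference_loss(pred, target):
    # Penalize mismatch between the magnitudes of finite-difference
    # image gradients along each axis, discouraging blurry outputs.
    loss = 0.0
    for axis in (0, 1):
        dp = np.abs(np.diff(pred, axis=axis))
        dt = np.abs(np.diff(target, axis=axis))
        loss += np.mean(np.abs(dp - dt))
    return loss
```

Because a blurred prediction has small gradients everywhere, this term stays large until edges are reproduced, complementing a plain intensity loss that blur alone can minimize.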

Towards Virtual H&E Staining of Hyperspectral Lung Histology Images Using Conditional Generative Adversarial Networks

A method is presented to virtually stain unstained hematoxylin and eosin (H&E) specimens using dimension reduction and conditional generative adversarial networks (cGANs), which build highly non-linear mappings between input and output images.

Deep MR to CT Synthesis Using Unpaired Data

This work proposes to train a generative adversarial network (GAN) with unpaired MR and CT images to synthesize CT images that closely approximate reference CT images, and was able to outperform a GAN model trained with paired MR and CT images.

DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction

This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets.

Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network With a Cyclic Loss

It is demonstrated that the proposed novel deep learning-based generative adversarial model, RefineGAN, outperforms the state-of-the-art CS-MRI methods by a large margin in terms of both running time and image quality via evaluation using several open-source MRI databases.

Virtual PET Images from CT Data Using Deep Convolutional Networks: Initial Results

A novel system for PET estimation from CT scans is presented, using fully convolutional networks (FCNs) and conditional generative adversarial networks (GANs) to derive PET data from CT data.

The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

The set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences, are reported, finding that different algorithms worked best for different sub-regions, but that no single algorithm ranked in the top for all sub-regions simultaneously.