PU-GAN: A One-Step 2-D InSAR Phase Unwrapping Based on Conditional Generative Adversarial Network

Pascazio V.;
2022-01-01

Abstract

Two-dimensional phase unwrapping (PU) is a classical ill-posed problem in synthetic aperture radar interferometry (InSAR). Traditional algorithmic model-based 2-D PU methods are limited by the Itoh condition, an empirical assumption drawn from PU researchers' experience, and face critical challenges under strong phase noise or abrupt phase changes. Recently, advanced learning-based 2-D PU methods have been able to break through the limitation of the Itoh condition owing to their data-driven frameworks, offering promising results in terms of both speed and accuracy. The one-step learning-based PU method, as one representative, retrieves the unwrapped phase directly from the wrapped phase through regression. However, the main disadvantage of one-step learning-based PU is that it usually blurs the output unwrapped phase due to its $L_{2}$ loss; that is, it cannot guarantee congruency between the rewrapped interferometric fringes of the PU solution and the input interferogram. To solve this problem, we propose a one-step 2-D PU method based on a conditional generative adversarial network (referred to as PU-GAN), which treats 2-D PU as an image-to-image translation problem. The generator in PU-GAN is trained to produce the unwrapped phase by minimizing an $L_{1}$-norm loss on a U-Net architecture, while the corresponding discriminator simultaneously learns an adversarial loss with a PatchGAN structure that tries to classify whether the output unwrapped phase image is real or fake. Both a theoretical analysis and experimental results show that the proposed method outperforms representative algorithmic model-based and learning-based 2-D PU methods.
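For context, the quantities and losses named in the abstract can be summarized as follows. This is a minimal sketch: the wrapping operator and Itoh condition follow their standard InSAR definitions, the objective follows the standard pix2pix-style conditional GAN formulation implied by the U-Net/PatchGAN/$L_{1}$ description, and the weighting $\lambda$ is an assumed hyperparameter rather than a value taken from the paper. The wrapped phase $\psi$ and the true unwrapped phase $\phi$ are related by

$$\psi = \mathcal{W}\{\phi\} = \operatorname{mod}(\phi + \pi,\, 2\pi) - \pi,$$

and the Itoh condition requires $\lvert \phi_{i+1} - \phi_{i} \rvert < \pi$ between neighboring samples so that wrapped phase differences can be integrated without ambiguity. A pix2pix-style objective for a generator $G$ (U-Net) and a discriminator $D$ (PatchGAN), both conditioned on $\psi$, is

$$\mathcal{L}_{\mathrm{cGAN}}(G,D) = \mathbb{E}_{\psi,\phi}\bigl[\log D(\psi,\phi)\bigr] + \mathbb{E}_{\psi}\bigl[\log\bigl(1 - D(\psi, G(\psi))\bigr)\bigr],$$

$$G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{\mathrm{cGAN}}(G,D) + \lambda\,\mathbb{E}_{\psi,\phi}\bigl[\lVert \phi - G(\psi) \rVert_{1}\bigr].$$

The $L_{1}$ fidelity term penalizes large per-pixel errors less severely than an $L_{2}$ term and tends to yield sharper, less blurred reconstructions, which is the motivation stated in the abstract for replacing the $L_{2}$ regression loss.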

Use this identifier to cite or link to this item: https://hdl.handle.net/11367/105158
Citations
  • Scopus 35
  • Web of Science 22