Abstract
Domain adaptation techniques have proven effective in addressing label deficiency in medical image segmentation. However, conventional domain-adaptation-based approaches often concentrate on matching global marginal distributions between domains in a class-agnostic fashion. In this paper, we present a dual-attention domain-adaptive segmentation network (DADASeg-Net) for cross-modality medical image segmentation. The key contribution of DADASeg-Net is a novel dual adversarial attention mechanism, which regularizes the domain adaptation module with two attention maps, from the spatial and class perspectives respectively. Specifically, the spatial attention map guides the domain adaptation module to focus on regions that are challenging to align during adaptation. The class attention map encourages the domain adaptation module to capture class-specific rather than class-agnostic knowledge for distribution alignment. DADASeg-Net shows superior performance on two challenging medical image segmentation tasks.
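The abstract describes two attention maps that modulate an adversarial alignment loss. Below is a minimal NumPy sketch of one plausible realization, assuming an entropy-based spatial attention map, confidence-based class weights, and an LSGAN-style discriminator term; the function names and exact formulations are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def spatial_attention(probs):
    """Per-pixel attention from prediction entropy: pixels the segmenter is
    uncertain about (high entropy) are presumed hard to align and receive
    more weight. probs: (H, W, C) softmax output; returns (H, W) in [0, 1].
    (Assumed formulation, for illustration only.)"""
    c = probs.shape[-1]
    ent = -(probs * np.log(probs + 1e-8)).sum(axis=-1)  # (H, W) entropy
    return ent / np.log(c)                              # normalize by max entropy

def class_attention(probs):
    """Per-class weights that up-weight classes with low average predicted
    mass, steering alignment toward class-specific rather than class-agnostic
    statistics. Returns (C,) weights summing to 1. (Assumed formulation.)"""
    conf = probs.mean(axis=(0, 1))   # (C,) mean predicted mass per class
    w = 1.0 / (conf + 1e-8)
    return w / w.sum()

def attended_adv_loss(probs, disc_logits):
    """Adversarial alignment loss on per-pixel discriminator logits (H, W),
    modulated by both attention maps. Uses a least-squares GAN target of 1
    (make target-domain features look source-like to the discriminator)."""
    s_att = spatial_attention(probs)            # (H, W) spatial attention
    c_att = class_attention(probs)              # (C,) class attention
    pix_cls_w = (probs * c_att).sum(axis=-1)    # (H, W) per-pixel class weight
    per_pix = (disc_logits - 1.0) ** 2          # LSGAN alignment term
    return float((s_att * pix_cls_w * per_pix).mean())
```

For instance, with a uniform softmax output every pixel has maximal entropy, so the spatial attention map is all ones and the loss reduces to the class-weighted LSGAN term.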
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 3445-3453
Number of pages | 9
Journal | IEEE Transactions on Medical Imaging
Volume | 41
Issue number | 11
DOIs |
State | Published - Nov 1 2022
Keywords
- Attention mechanism
- adversarial learning
- medical image segmentation
- unsupervised domain adaptation
ASJC Scopus subject areas
- Software
- Radiological and Ultrasound Technology
- Computer Science Applications
- Electrical and Electronic Engineering