DuAT

Feilong Tang, Qiming Huang, Jinfeng Wang, Xianxu Hou, Jionglong Su, and Jingxin Liu

This repo is the official implementation of "DuAT: Dual-Aggregation Transformer Network for Medical Image Segmentation".


1. Introduction

DuAT was initially described in our PRCV 2023 paper.

Transformer-based models have been widely demonstrated to be successful in computer vision tasks by modelling long-range dependencies and capturing global representations. However, they are often dominated by features of large patterns, leading to the loss of local details (e.g., boundaries and small objects), which are critical in medical image segmentation. To alleviate this problem, we propose a Dual-Aggregation Transformer Network called DuAT, which is characterized by two innovative designs, namely the Global-to-Local Spatial Aggregation (GLSA) and Selective Boundary Aggregation (SBA) modules. The GLSA module aggregates and represents both global and local spatial features, which are beneficial for locating large and small objects, respectively. The SBA module aggregates the boundary characteristics from low-level features and the semantic information from high-level features to better preserve boundary details and locate the re-calibrated objects. Extensive experiments on six benchmark datasets demonstrate that our proposed model outperforms state-of-the-art methods in the segmentation of skin lesions and of polyps in colonoscopy images. In addition, our approach is more robust than existing methods in various challenging situations, such as small object segmentation and ambiguous object boundaries.
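
To make the two designs concrete, below is a minimal, hypothetical PyTorch sketch of the two kinds of aggregation described above: a global branch fused with a local branch, and low-level (boundary-rich) features gated by upsampled high-level (semantic) features. It is a simplified illustration with made-up layer choices, not the GLSA/SBA modules implemented in this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalFusion(nn.Module):
    # Toy global-to-local aggregation (NOT the repo's GLSA): a local 3x3 branch and a
    # globally pooled channel-attention branch are computed in parallel and fused.
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.global_ctx = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global descriptor per channel
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                                 # channel re-weighting
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        local_feat = self.local(x)
        global_feat = x * self.global_ctx(x)              # broadcast over H x W
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))

class BoundarySemanticFusion(nn.Module):
    # Toy boundary/semantic aggregation (NOT the repo's SBA): high-level semantics gate
    # where to trust the high-resolution, boundary-rich low-level features.
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear", align_corners=False)
        return self.out(low * torch.sigmoid(self.gate(high)) + high)

low = torch.randn(1, 64, 88, 88)     # high-resolution, low-level features
high = torch.randn(1, 64, 11, 11)    # low-resolution, high-level features
print(BoundarySemanticFusion(64)(GlobalLocalFusion(64)(low), high).shape)  # (1, 64, 88, 88)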

2. Framework Overview

3. Results

3.1 Image-level Polyp Segmentation

The polyp segmentation prediction results are available here.

4. Usage:

4.1 Recommended environment:

Python 3.8
Pytorch 1.7.1
torchvision 0.8.2
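
A quick way to confirm that the installed versions match (a simple sanity check, not part of the repository):

import sys
import torch
import torchvision

print("Python:", sys.version.split()[0])        # expected ~3.8
print("PyTorch:", torch.__version__)            # expected 1.7.1
print("torchvision:", torchvision.__version__)  # expected 0.8.2
print("CUDA available:", torch.cuda.is_available())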

4.2 Data preparation:

Download the training and testing datasets and move them into ./dataset/. They can be found in this Google Drive/Baidu Drive [code: dr1h].
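
Once the archives are unpacked into ./dataset/, a small check such as the one below can confirm the files are where the scripts expect them. The sub-folder names used here (TrainDataset/images, TrainDataset/masks, TestDataset) are an assumption based on the common polyp-segmentation layout; adjust them to whatever the downloaded archive actually contains.

from pathlib import Path

# Hypothetical layout check: the sub-folder names below are assumptions; adjust them
# to match the downloaded archive.
for sub in ["dataset/TrainDataset/images", "dataset/TrainDataset/masks", "dataset/TestDataset"]:
    p = Path(sub)
    count = sum(1 for _ in p.rglob("*")) if p.is_dir() else 0
    print(f"{sub}: {'OK' if p.is_dir() else 'MISSING'} ({count} entries)")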

4.3 Pretrained model:

You should download the pretrained model from Google Drive/Baidu Drive [code:w4vk], and then put it in the './pretrained_pth' folder for initialization.
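
For reference, the snippet below inspects a downloaded checkpoint before training; the file name is a placeholder, and the actual initialization is handled by the training script in this repository.

import torch

# Hedged illustration; the checkpoint file name below is a placeholder.
state = torch.load("./pretrained_pth/pretrained_backbone.pth", map_location="cpu")
print(f"{len(state)} tensors, e.g. {list(state)[:3]}")
# The model defined in this repo would then accept these weights via
# model.load_state_dict(state, strict=False), so that layers absent from the
# checkpoint (e.g. the segmentation head) keep their fresh initialization.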

4.4 Training:

Clone the repository:

git clone https://github.com/Barrett-python/DuAT.git
cd DuAT
bash train.sh

4.5 Testing:

cd DuAT
bash test.sh

4.6 Evaluating your trained model:

Matlab: Please refer to the work of MICCAI2020 (link).

Python: Please refer to the work of ACMMM2021 (link).

Please note that we used the MATLAB version for the evaluation reported in our paper.
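
For a quick sanity check without either toolbox, the two most common metrics (Dice and IoU) can be computed directly from a saved prediction map and its ground-truth mask. The snippet below is only an approximation for debugging, with hypothetical file paths; the numbers reported in the paper come from the MATLAB toolbox above.

import numpy as np
from PIL import Image

def dice_and_iou(pred_path, gt_path, thresh=0.5):
    # Binarize a saved prediction map and compare it with the ground-truth mask.
    pred = np.asarray(Image.open(pred_path).convert("L"), dtype=np.float32) / 255.0 > thresh
    gt = np.asarray(Image.open(gt_path).convert("L"), dtype=np.float32) / 255.0 > 0.5
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (np.logical_or(pred, gt).sum() + 1e-8)
    return dice, iou

# Example with hypothetical paths:
# print(dice_and_iou("results/Kvasir/1.png", "dataset/TestDataset/Kvasir/masks/1.png"))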

4.7 Well-trained model:

You can download the trained model from Google Drive and put it in the './model_pth' directory.
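
As an illustration, single-image inference with the trained weights generally follows the pattern below. This is a hedged sketch: the module path lib.DuAT, the checkpoint name, the 352x352 input size, and the normalization constants are assumptions; bash test.sh is the authoritative way to reproduce the reported results.

import torch
import torch.nn.functional as F
import numpy as np
from PIL import Image
from torchvision import transforms

from lib.DuAT import DuAT   # assumption: model class provided by this repo

model = DuAT().eval()
model.load_state_dict(torch.load("./model_pth/DuAT.pth", map_location="cpu"))

preprocess = transforms.Compose([
    transforms.Resize((352, 352)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = Image.open("example.png").convert("RGB")

with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))
    out = out[-1] if isinstance(out, (list, tuple)) else out   # some variants return several maps
    prob = torch.sigmoid(F.interpolate(out, size=img.size[::-1], mode="bilinear",
                                       align_corners=False))

mask = (prob.squeeze().numpy() > 0.5).astype(np.uint8) * 255
Image.fromarray(mask).save("example_pred.png")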

5. Citation

If you find this code or idea useful, please cite our work:

@inproceedings{tang2023duat,
  title={DuAT: Dual-aggregation transformer network for medical image segmentation},
  author={Tang, Feilong and Xu, Zhongxing and Huang, Qiming and Wang, Jinfeng and Hou, Xianxu and Su, Jionglong and Liu, Jingxin},
  booktitle={Chinese Conference on Pattern Recognition and Computer Vision (PRCV)},
  pages={343--356},
  year={2023},
  organization={Springer}
}

6. Acknowledgement

We are very grateful to the authors of PraNet, Polyp-PVT, and SSformer; their excellent works provided the basis for our framework.
