TY - GEN
T1 - α-UNet++
T2 - 2nd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2020, and the 1st MICCAI Workshop on Distributed and Collaborative Learning, DCL 2020, held in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2020
AU - Chen, Yaxin
AU - Ma, Benteng
AU - Xia, Yong
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - UNet++, an encoder-decoder architecture built upon the well-known UNet, has achieved state-of-the-art results on many medical image segmentation tasks. Despite the improved performance, UNet++ introduces densely connected decoding blocks, some of which are redundant for a specific task. In this paper, we propose α-UNet++, which allows us to automatically identify and discard redundant decoding blocks without loss of precision. To this end, we design an auxiliary indicator function layer that compresses the network architecture by removing any decoding block in which all individual responses fall below a given threshold. We evaluated the compressed segmentation architectures obtained for liver segmentation and nuclei segmentation against UNet and UNet++. Compared with UNet++, our compressed architectures reduce the parameters by 18.89% on liver segmentation and 34.17% on nuclei segmentation, while yielding an average IoU improvement of 0.27% and 0.11% on the two tasks, respectively. Our results suggest that the compressed UNet++ produced by the proposed α-UNet++ not only slightly improves the segmentation accuracy but also considerably reduces the model complexity.
AB - UNet++, an encoder-decoder architecture built upon the well-known UNet, has achieved state-of-the-art results on many medical image segmentation tasks. Despite the improved performance, UNet++ introduces densely connected decoding blocks, some of which are redundant for a specific task. In this paper, we propose α-UNet++, which allows us to automatically identify and discard redundant decoding blocks without loss of precision. To this end, we design an auxiliary indicator function layer that compresses the network architecture by removing any decoding block in which all individual responses fall below a given threshold. We evaluated the compressed segmentation architectures obtained for liver segmentation and nuclei segmentation against UNet and UNet++. Compared with UNet++, our compressed architectures reduce the parameters by 18.89% on liver segmentation and 34.17% on nuclei segmentation, while yielding an average IoU improvement of 0.27% and 0.11% on the two tasks, respectively. Our results suggest that the compressed UNet++ produced by the proposed α-UNet++ not only slightly improves the segmentation accuracy but also considerably reduces the model complexity.
KW - Auxiliary indicator function
KW - Medical image segmentation
KW - Network compression
KW - UNet++
UR - http://www.scopus.com/inward/record.url?scp=85092163425&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-60548-3_1
DO - 10.1007/978-3-030-60548-3_1
M3 - Conference contribution
AN - SCOPUS:85092163425
SN - 9783030605476
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 12
BT - Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning - 2nd MICCAI Workshop, DART 2020, and 1st MICCAI Workshop, DCL 2020, Held in Conjunction with MICCAI 2020, Proceedings
A2 - Albarqouni, Shadi
A2 - Bakas, Spyridon
A2 - Kamnitsas, Konstantinos
A2 - Cardoso, M. Jorge
A2 - Landman, Bennett
A2 - Li, Wenqi
A2 - Milletari, Fausto
A2 - Rieke, Nicola
A2 - Roth, Holger
A2 - Xu, Daguang
A2 - Xu, Ziyue
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 4 October 2020 through 8 October 2020
ER -