Abstract
Computer-aided diagnosis (CAD) technology has been widely used in the early diagnosis of breast cancer. Most existing breast ultrasound classification methods crop a tumor-centered image (TCI) from each ultrasound image as the input to the system. These methods overlook the fact that the tumor and its surrounding tissues can be viewed at multiple scales, and multi-resolution information is difficult to extract from a single view image. In addition, current methods do not effectively capture fine-grained features, even though subtle details play an important role in breast lesion classification. In this work, we propose a novel strategy that generates multi-resolution TCIs from a single ultrasound image, turning the conventional single-image learning task into a multi-view learning task. We further propose an improved combined-style fusion method suited to deep networks, which integrates the advantages of decision-based and feature-based fusion to aggregate the information from different views. At the same time, we make a first attempt to introduce fine-grained classification into breast ultrasound classification by capturing the pairwise correlation between feature channels at each position to extract subtle information. Comparative experiments show that our method effectively improves classification performance and achieves the best results on five metrics.
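The abstract's notion of capturing pairwise correlation between feature channels is in the spirit of bilinear (Gram-matrix) pooling used in fine-grained recognition. The following is a minimal sketch of that idea, not the authors' actual implementation; the module name `ChannelCorrelation` and the normalization steps are illustrative assumptions.

```python
# Sketch only: bilinear-style pairwise channel correlation over a CNN feature map.
# This is an assumed formulation, not code from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelCorrelation(nn.Module):
    """Compute a C x C channel-correlation descriptor from a feature map."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features from a backbone network
        b, c, h, w = x.shape
        feats = x.view(b, c, h * w)                               # flatten spatial positions
        corr = torch.bmm(feats, feats.transpose(1, 2)) / (h * w)  # (B, C, C) channel pairwise correlation
        corr = corr.view(b, c * c)                                 # vectorize for a classifier head
        corr = torch.sign(corr) * torch.sqrt(corr.abs() + 1e-12)  # signed square-root, common in bilinear pooling
        return F.normalize(corr, dim=1)                            # L2-normalized descriptor

# Usage: descriptor = ChannelCorrelation()(backbone_features)
```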
| Original language | English |
| --- | --- |
| Article number | 109776 |
| Journal | Pattern Recognition |
| Volume | 143 |
| DOI | |
| Publication status | Published - Nov 2023 |