TY - GEN
T1 - Cross-Domain Infrared Image Classification via Image-to-Image Translation and Deep Domain Generalization
AU - Guo, Zhao Rui
AU - Niu, Jia Wei
AU - Liu, Zhun Ga
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In target recognition, information about the target usually exists in several domains captured by different sources (sensors). However, sensor limitations sometimes make it difficult to obtain complete target information as source-domain data. For the classification of paired visible and infrared images, we assume that visible-infrared image pairs are available for some classes and only visible images for other classes, while unseen infrared images of those other classes must be classified. This is in fact a zero-shot deep domain adaptation (ZDDA) problem, which divides the data into task-relevant (T-R) data and task-irrelevant (T-I) data: the classes of the T-R data must be recognized, while those of the T-I data need not be. The traditional ZDDA method sacrifices the classification accuracy of T-R data in the target domain for the generalization ability of T-I data in the source domain, so we propose a method that solves the problem in a different way. More precisely, we first use an image-to-image translation network to learn the mapping between the source-domain (visible) T-I data and the target-domain (infrared) T-I data, and convert the visible T-R images into pseudo-infrared images. The pseudo-infrared images and the inverted grayscale T-R images are then combined to construct a new hybrid domain (source domain I). A hybrid domain of T-I images (source domain II) is constructed in the same way, and the infrared T-I images form a third domain (source domain III). Finally, we design a deep domain generalization method for cross-domain infrared image classification whose total loss combines the classification loss on source domain I with the distribution alignment loss between source domains II and III. We evaluate our method on the VAIS ship and RGB-NIR scene datasets, and the experimental results demonstrate its effectiveness.
AB - In target recognition, information about the target usually exists in several domains captured by different sources (sensors). However, sensor limitations sometimes make it difficult to obtain complete target information as source-domain data. For the classification of paired visible and infrared images, we assume that visible-infrared image pairs are available for some classes and only visible images for other classes, while unseen infrared images of those other classes must be classified. This is in fact a zero-shot deep domain adaptation (ZDDA) problem, which divides the data into task-relevant (T-R) data and task-irrelevant (T-I) data: the classes of the T-R data must be recognized, while those of the T-I data need not be. The traditional ZDDA method sacrifices the classification accuracy of T-R data in the target domain for the generalization ability of T-I data in the source domain, so we propose a method that solves the problem in a different way. More precisely, we first use an image-to-image translation network to learn the mapping between the source-domain (visible) T-I data and the target-domain (infrared) T-I data, and convert the visible T-R images into pseudo-infrared images. The pseudo-infrared images and the inverted grayscale T-R images are then combined to construct a new hybrid domain (source domain I). A hybrid domain of T-I images (source domain II) is constructed in the same way, and the infrared T-I images form a third domain (source domain III). Finally, we design a deep domain generalization method for cross-domain infrared image classification whose total loss combines the classification loss on source domain I with the distribution alignment loss between source domains II and III. We evaluate our method on the VAIS ship and RGB-NIR scene datasets, and the experimental results demonstrate its effectiveness.
UR - http://www.scopus.com/inward/record.url?scp=85146742620&partnerID=8YFLogxK
U2 - 10.1109/ICARCV57592.2022.10004308
DO - 10.1109/ICARCV57592.2022.10004308
M3 - Conference contribution
AN - SCOPUS:85146742620
T3 - 2022 17th International Conference on Control, Automation, Robotics and Vision, ICARCV 2022
SP - 487
EP - 493
BT - 2022 17th International Conference on Control, Automation, Robotics and Vision, ICARCV 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 17th International Conference on Control, Automation, Robotics and Vision, ICARCV 2022
Y2 - 11 December 2022 through 13 December 2022
ER -