TY - JOUR
T1 - A Unified and Biologically Plausible Relational Graph Representation of Vision Transformers
AU - Chen, Yuzhong
AU - Xiao, Zhenxiang
AU - Du, Yu
AU - Zhao, Lin
AU - Zhang, Lu
AU - Wu, Zihao
AU - Zhu, Dajiang
AU - Zhang, Tuo
AU - Yao, Dezhong
AU - Hu, Xintao
AU - Liu, Tianming
AU - Jiang, Xi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2025
Y1 - 2025
N2 - Vision transformer (ViT) and its variants have achieved remarkable success in various tasks. The key characteristic of these ViT models is the adoption of different aggregation strategies for spatial patch information within artificial neural networks (ANNs). However, there is still a lack of a unified representation of different ViT architectures for systematic understanding and assessment of model representation performance. Moreover, how similar those well-performing ViT ANNs are to real biological neural networks (BNNs) remains largely unexplored. To answer these fundamental questions, we propose, for the first time, a unified and biologically plausible relational graph representation of ViT models. Specifically, the proposed relational graph representation consists of two key subgraphs: an aggregation graph and an affine graph. The former considers ViT tokens as nodes and describes their spatial interaction, while the latter regards network channels as nodes and reflects the information communication between channels. Using this unified relational graph representation, we found that: 1) model performance was closely related to graph measures; 2) the proposed relational graph representation of ViT has high similarity with real BNNs; and 3) model performance improved further when training with a superior model to constrain the aggregation graph.
AB - Vision transformer (ViT) and its variants have achieved remarkable success in various tasks. The key characteristic of these ViT models is the adoption of different aggregation strategies for spatial patch information within artificial neural networks (ANNs). However, there is still a lack of a unified representation of different ViT architectures for systematic understanding and assessment of model representation performance. Moreover, how similar those well-performing ViT ANNs are to real biological neural networks (BNNs) remains largely unexplored. To answer these fundamental questions, we propose, for the first time, a unified and biologically plausible relational graph representation of ViT models. Specifically, the proposed relational graph representation consists of two key subgraphs: an aggregation graph and an affine graph. The former considers ViT tokens as nodes and describes their spatial interaction, while the latter regards network channels as nodes and reflects the information communication between channels. Using this unified relational graph representation, we found that: 1) model performance was closely related to graph measures; 2) the proposed relational graph representation of ViT has high similarity with real BNNs; and 3) model performance improved further when training with a superior model to constrain the aggregation graph.
KW - Artificial neural network (ANN)
KW - biological neural network (BNN)
KW - relational graph
KW - vision transformer (ViT)
UR - http://www.scopus.com/inward/record.url?scp=85181581900&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2023.3342810
DO - 10.1109/TNNLS.2023.3342810
M3 - Article
AN - SCOPUS:85181581900
SN - 2162-237X
VL - 36
SP - 3231
EP - 3243
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 2
ER -