A Unified and Biologically Plausible Relational Graph Representation of Vision Transformers

Yuzhong Chen, Zhenxiang Xiao, Yu Du, Lin Zhao, Lu Zhang, Zihao Wu, Dajiang Zhu, Tuo Zhang, Dezhong Yao, Xintao Hu, Tianming Liu, Xi Jiang

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Vision transformer (ViT) and its variants have achieved remarkable success in various tasks. A key characteristic of these ViT models is that they adopt different strategies for aggregating spatial patch information within artificial neural networks (ANNs). However, a unified representation of different ViT architectures is still lacking, which hinders systematic understanding and assessment of model representation performance. Moreover, how similar these well-performing ViT ANNs are to real biological neural networks (BNNs) remains largely unexplored. To answer these fundamental questions, we, for the first time, propose a unified and biologically plausible relational graph representation of ViT models. Specifically, the proposed relational graph representation consists of two key subgraphs: an aggregation graph and an affine graph. The former treats ViT tokens as nodes and describes their spatial interaction, while the latter regards network channels as nodes and reflects the information communication between channels. Using this unified relational graph representation, we found that: 1) model performance was closely related to graph measures; 2) the proposed relational graph representation of ViT has high similarity with real BNNs; and 3) model performance improved further when the aggregation graph was constrained by a superior model during training.

Original language: English
Pages (from-to): 3231-3243
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 2
DOI
Publication status: Published - 2025
