Inproceedings

Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference.

CVPR, pages 14431–14442. IEEE, 2023.
