Improve Vision Transformers Training by Suppressing Over-smoothing

Improves Vision Transformers by suppressing over-smoothing.

Twins: Revisiting the Design of Spatial Attention in Vision Transformers

Twins: revisits the design of spatial attention in Vision Transformers.

All Tokens Matter: Token Labeling for Training Better Vision Transformers

LV-ViT: trains better Vision Transformers with token labeling.

Incorporating Convolution Designs into Visual Transformers

CeiT: incorporates convolution designs into visual Transformers.

CvT: Introducing Convolutions to Vision Transformers

CvT: introduces convolutions into Vision Transformers.

Per-Pixel Classification is Not All You Need for Semantic Segmentation

MaskFormer: per-pixel classification is not all you need for semantic segmentation.