I put this paper into 4o to check whether it is relevant, so that you don't have to; here are the bullet points:

- Vision Transformers can be parallelized to reduce latency and improve optimization without sacrificing accuracy.

- Fine-tuning only the attention layers is often sufficient for adapting ViTs to new tasks or resolutions, saving compute and memory (rough sketch of what that looks like below).

- MLP-based patch preprocessing improves performance in masked self-supervised learning by preserving patch independence.
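
For the second point, a minimal sketch of attention-only fine-tuning, assuming timm's ViT implementation (where the attention submodules are named `blocks.<i>.attn` and the classifier is `head`); the model name and hyperparameters are just placeholders:

```python
import timm
import torch

# Assumption: timm-style ViT; pretrained weights skipped to keep the sketch offline.
model = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=10)

# Freeze everything except the attention projections and the (new) classifier head.
for name, param in model.named_parameters():
    param.requires_grad = ".attn." in name or name.startswith("head.")

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4, weight_decay=0.05)

# Sanity check: only a fraction of the weights will receive gradients.
total = sum(p.numel() for p in model.parameters())
tuned = sum(p.numel() for p in trainable)
print(f"fine-tuning {tuned / total:.1%} of {total / 1e6:.1f}M parameters")
```

The optimizer only sees the unfrozen parameters, so memory for optimizer state and gradients drops along with compute.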