Vision Transformer Series Reference Papers

Uploader: 40721108 | Upload time: 2022-09-13 09:07:15 | File size: 216.64MB | File type: ZIP
A collection of reference papers on the Vision Transformer line of work. Most are widely read papers, including work from top conferences such as CVPR and NIPS as well as top journals. Everyone is welcome to use them for study.

Resource Details

[{"title":"( 48 个子文件 216.64MB ) Vision Transformer系列参考论文","children":[{"title":"VIT_2022.08.02","children":[{"title":"2020-ConvBERT Improving BERT with Span-based Dynamic Convolution.pdf <span style='color:#111;'> 1.61MB </span>","children":null,"spread":false},{"title":"2017-Attention Is All Y ou Need.pdf <span style='color:#111;'> 556.07KB </span>","children":null,"spread":false},{"title":"2021-Conformer Local Features Coupling Global Representations for Visual Recognition.pdf <span style='color:#111;'> 1.28MB </span>","children":null,"spread":false},{"title":"2021-Swin Transformer Hierarchical Vision Transformer using Shifted Windows.pdf <span style='color:#111;'> 1.30MB </span>","children":null,"spread":false},{"title":"2022-Deep learning for image colorization Current and future prospects.pdf <span style='color:#111;'> 9.74MB </span>","children":null,"spread":false},{"title":"2022-Uformer A General U-Shaped Transformer for Image Restoration.pdf <span style='color:#111;'> 3.11MB </span>","children":null,"spread":false},{"title":"2021-LeViT a Vision Transformer in ConvNet’s Clothing for Faster Inference.pdf <span style='color:#111;'> 716.39KB </span>","children":null,"spread":false},{"title":"2020-An Image is Worth 16x16 Words Transformers for Image Recognition at Scale.pdf <span style='color:#111;'> 3.57MB </span>","children":null,"spread":false},{"title":"2018-BERT Pre-training of Deep Bidirectional Transformers for Language Understanding.pdf <span style='color:#111;'> 757.00KB </span>","children":null,"spread":false},{"title":"2022-MOBILEVIT LIGHT-WEIGHT, GENERAL-PURPOSE, AND MOBILE-FRIENDLY VISION TRANSFORMER.pdf <span style='color:#111;'> 10.11MB </span>","children":null,"spread":false},{"title":"2021- COLORIZATION TRANSFORMER.pdf <span style='color:#111;'> 31.14MB </span>","children":null,"spread":false},{"title":"2022-Mobile-Former Bridging MobileNet and Transformer.pdf <span style='color:#111;'> 1.12MB </span>","children":null,"spread":false},{"title":"2022-CSWin Transformer A General Vision Transformer Backbone with Cross-Shaped Windows.pdf <span style='color:#111;'> 1.10MB </span>","children":null,"spread":false},{"title":"2022-CMT Convolutional Neural Networks Meet Vision Transformers.pdf <span style='color:#111;'> 763.88KB </span>","children":null,"spread":false},{"title":"2021-TransGAN Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up.pdf <span style='color:#111;'> 27.96MB </span>","children":null,"spread":false},{"title":"2021-Compositional Transformers for Scene Generation.pdf <span style='color:#111;'> 2.69MB </span>","children":null,"spread":false},{"title":"2021-Tokens-to-Token ViT Training Vision Transformers from Scratch on ImageNet.pdf <span style='color:#111;'> 10.27MB </span>","children":null,"spread":false},{"title":"2022-InstaFormer Instance-Aware Image-to-Image Translation with Transformer.pdf <span style='color:#111;'> 9.43MB </span>","children":null,"spread":false},{"title":"2022-Transformers in Vision A Survey.pdf <span style='color:#111;'> 7.30MB </span>","children":null,"spread":false},{"title":"2022-PVT v2 Improved baselines with Pyramid Vision Transformer.pdf <span style='color:#111;'> 1005.90KB </span>","children":null,"spread":false},{"title":"2022-Styleformer Transformer based Generative Adversarial Networks with Style.pdf <span style='color:#111;'> 7.49MB </span>","children":null,"spread":false},{"title":"2022-A Survey of Visual Transformers.pdf <span style='color:#111;'> 4.46MB 
</span>","children":null,"spread":false},{"title":"2021-Transformer in Transformer.pdf <span style='color:#111;'> 5.29MB </span>","children":null,"spread":false},{"title":"2021-Pyramid Vision Transformer A Versatile Backbone for Dense Prediction without Convolutions.pdf <span style='color:#111;'> 729.19KB </span>","children":null,"spread":false},{"title":"2022-Vision Transformer with Deformable Attention.pdf <span style='color:#111;'> 1.85MB </span>","children":null,"spread":false},{"title":"2021-CvT Introducing Convolutions to Vision Transformers.pdf <span style='color:#111;'> 1.07MB </span>","children":null,"spread":false},{"title":"2021-Generative Adversarial Transformers.pdf <span style='color:#111;'> 2.55MB </span>","children":null,"spread":false},{"title":"2022-PatchFormer An Efficient Point Transformer with Patch Attention.pdf <span style='color:#111;'> 2.76MB </span>","children":null,"spread":false},{"title":"2022-StyleSwin Transformer-based GAN for High-resolution Image Generation.pdf <span style='color:#111;'> 3.86MB </span>","children":null,"spread":false},{"title":"2022-ITTR Unpaired Image-to-Image Translation with Transformers.pdf <span style='color:#111;'> 4.16MB </span>","children":null,"spread":false},{"title":"2022-StyTr2Image Style Transfer with Transformers.pdf <span style='color:#111;'> 9.79MB </span>","children":null,"spread":false},{"title":"2021-Rethinking Spatial Dimensions of Vision Transformers.pdf <span style='color:#111;'> 657.80KB </span>","children":null,"spread":false},{"title":"2022-MPViT Multi-Path Vision Transformer for Dense Prediction.pdf <span style='color:#111;'> 5.83MB </span>","children":null,"spread":false},{"title":"2022-Towards Robust Vision Transformer.pdf <span style='color:#111;'> 1.97MB </span>","children":null,"spread":false},{"title":"2022-A Survey on Vision T ransformer.pdf <span style='color:#111;'> 2.63MB </span>","children":null,"spread":false},{"title":"2022-Restormer Efficient Transformer for High-Resolution Image Restoration.pdf <span style='color:#111;'> 7.28MB </span>","children":null,"spread":false},{"title":"2022-CT2 Colorization Transformer via Color Tokens.pdf <span style='color:#111;'> 8.89MB </span>","children":null,"spread":false},{"title":"2021-ViTGAN Training GANs with Vision Transformers.pdf <span style='color:#111;'> 3.34MB </span>","children":null,"spread":false},{"title":"2022-On the Integration of Self-Attention and Convolution.pdf <span style='color:#111;'> 1.32MB </span>","children":null,"spread":false},{"title":"2022-SCENE TRANSFORMER A UNIFIED ARCHITECTURE FOR PREDICTING MULTIPLE AGENT TRAJECTORIES.pdf <span style='color:#111;'> 1.39MB </span>","children":null,"spread":false},{"title":"2021-Twins Revisiting the Design of Spatial Attention in Vision Transformers.pdf <span style='color:#111;'> 360.88KB </span>","children":null,"spread":false},{"title":"2022-Transformer for Single Image Super-Resolution.pdf <span style='color:#111;'> 8.69MB </span>","children":null,"spread":false},{"title":"2022-ParC-Net Position Aware Circular Convolution with Merits from ConvNets and Transformer.pdf <span style='color:#111;'> 2.29MB </span>","children":null,"spread":false},{"title":"2022-Attention mechanisms in computer vision A survey.pdf <span style='color:#111;'> 2.74MB </span>","children":null,"spread":false},{"title":"2021-Swinir Image restoration using swin transformer.pdf <span style='color:#111;'> 6.56MB </span>","children":null,"spread":false},{"title":"2022-Exploring Plain Vision Transformer Backbones for Object 
Detection.pdf <span style='color:#111;'> 918.81KB </span>","children":null,"spread":false},{"title":"2021-Yes, Attention Is All You Need, for Exemplar based Colorization.pdf <span style='color:#111;'> 3.40MB </span>","children":null,"spread":false},{"title":"2021-CROSSFORMER A VERSATILE VISION TRANSFORMER HINGING ON CROSS-SCALE ATTENTION.pdf <span style='color:#111;'> 2.51MB </span>","children":null,"spread":false}],"spread":false}],"spread":true}]

Disclaimer

The resources on 【只为小站】 are shared by users and are provided for learning and research only. Please delete them within 24 hours of downloading and do not use them for any other purpose; you bear sole responsibility for any misuse. Given the nature of the Internet, 【只为小站】 cannot substantively verify the ownership, legality, compliance, authenticity, scientific validity, completeness, or validity of works, information, or content transmitted by users. Whether or not the operator of 【只为小站】 has reviewed such material, users themselves bear any legal liability for infringement or ownership disputes that may arise from the works, information, or content they transmit.
The resources on this site do not represent the site's views or position; they are based on user sharing. In accordance with Article 22 of China's Regulations on the Protection of the Right to Network Dissemination of Information (《信息网络传播权保护条例》), if a resource infringes rights or is otherwise problematic, please contact the site's customer service at zhiweidada#qq.com (replace # with @). The site will give its full support and cooperation and will respond and handle the matter promptly. For more on copyright and liability, see the Copyright and Disclaimer page (版权及免责申明).